Some of you are quite interested in knowing what we teach in the mixing and mastering module, part of our Online Music Production PRO course, which offers 3 months of live online training in Ableton Live. Many students come in with a basic understanding of the mixing process and simply want to learn it in more depth. For those of you who want to know what the mixing and mastering module covers, here is an overview.
Before we start, for those of you with no prior knowledge of mixing and mastering, let me give you a brief introduction to that world.
Importance of Mixing:
Firstly, let's talk about mixing. We all know the different stages of music production: the first stage is recording vocals or instruments. Once we have the recorded vocals, instruments, or both, we start adding samples and sounds in our DAW, using various third-party plugins for sound design as well as for mixing.
The process of music production is intriguing, but the way we interpret music at the time of production is very different from the way we interpret it at the time of mixing and mastering. Our whole listening perception changes once we switch to a mixing/mastering mindset.
In a good mix, every element sits in harmony with the others. Mixing and mastering is not an easy task: with just a couple of tools we can achieve clarity, but getting all the way to industry-standard quality takes more.
There is certain hardware gear used by the majority of engineers in the industry, and if we use emulations of that same hardware, we can get our mixes very close to industry-standard quality. So mixing is not just about dynamics processing (compression, expansion, gating, limiting), effects processing (reverb, delay, chorus, flanger, phaser), or EQing.
It is the amalgamation of all of these, plus knowing the different hardware units and their colors. We have designed the course so that even a novice who has just started with music production can understand and make sense of every concept.
Mixing is indeed an essential part of music. Its purpose is not only to bring clarity and balance the instruments well, but also to bring out the emotion of the track, which a lot of audio engineers fail to do. Ultimately, music is all about emotion, and our connection to it is sacred.
What follows are convenient ways to go about mixing; the rest of the tips and tricks vary from person to person and become their signature style of mixing an element. Some might suggest a few steps to reduce redundancy when adding plugins, but that doesn't change the essence of mixing. So, let's start.
1. Exporting the Stems:
Exporting the stems is the first step of mixing. All the tracks used in production (audio samples, loops, or synthesized MIDI tracks) need to be printed as audio so that real-time effects are no longer running while we play the track during mixing. With everything printed, no time-based effects (chorus, flanger, phaser, reverb, delay) remain active, which ensures there is no constructive or destructive interference occurring at any given point.
2. Importing the Stems:
Importing the stems back into a new mixing session is the next step. Once you have imported them, the clerical work starts: cutting the empty space out of the audio files so you know exactly where the audio is and can localize each file easily. This makes your session look more organized and keeps you interested while mixing the track.
3. Faders Down:
Once all the clerical work is done, we need to leave some headroom as a precaution against clipping. For this, bring the faders of all the audio tracks (not the group tracks) down to -14 dB. This gives you enough headroom and maintains a safe distance from 0 dBFS.
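To make the headroom math concrete, here is a tiny Python sketch (the function names are mine, for illustration only) of how a dB value maps to the linear gain a fader actually applies:

```python
import math

def db_to_gain(db: float) -> float:
    """Convert a dB value to a linear amplitude multiplier."""
    return 10 ** (db / 20)

def gain_to_db(gain: float) -> float:
    """Convert a linear amplitude multiplier back to dB."""
    return 20 * math.log10(gain)

# A fader pulled down to -14 dB leaves the signal at roughly a fifth
# of its original amplitude: plenty of daylight below 0 dBFS.
fader_gain = db_to_gain(-14.0)
```

Pulling every fader to -14 dB scales each track to about 20% of its original amplitude, which is why the sum of all tracks stays comfortably clear of 0 dBFS.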
4. Instrument Balancing:
Adjusting the levels of the audio tracks is very important. This is the key ingredient in making your entire mix cut through. Now, the biggest question a lot of our students ask is: how do we decide the ideal level for each element? How do we know what the ideal level of the kick, the snare, or any other element should be?
This is honestly a tricky question, and to be very honest, there is no single answer. There are, however, methods that can act as a guide for rough balancing, for instance the Pink Noise Method.
Add any pink noise generator on the master channel and keep its level around -14 dB. Then take all the audio track faders down to -inf and lift each element's fader one by one, up to the point where you can barely hear the sound cutting through the pink noise.
Pink noise has equal energy per octave, and it is often called the most musical noise to start with. This will not give you the ideal balance of the mix, but it will at least give you a rough idea. Another method is the Anchor Point Method, which is quite similar to the Pink Noise Method but differs in one way.
Instead of using pink noise as a ballpark to adjust levels, we pick a key element of the track (for instance, the kick) and adjust the levels of the other elements against it. This will not give you a precise balance either, but it will be very close to one. The best way to practice balancing is to follow a reference track: commercial tracks always sound better and give a better judgment of what the levels of all the elements should be. Use the reference to learn the balance between different elements and adjust yours accordingly to match that standard.
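If you want to experiment with the Pink Noise Method outside your DAW, here is a rough Python/NumPy sketch of generating a pink noise bed and setting it to -14 dBFS RMS. The function names and the FFT-shaping approach are my own illustration, not the only way to make pink noise:

```python
import numpy as np

def pink_noise(n_samples: int, seed: int = 0) -> np.ndarray:
    """Generate pink (1/f) noise by shaping white noise in the frequency domain."""
    rng = np.random.default_rng(seed)
    spectrum = np.fft.rfft(rng.standard_normal(n_samples))
    freqs = np.fft.rfftfreq(n_samples)
    scale = np.ones_like(freqs)
    scale[1:] = 1.0 / np.sqrt(freqs[1:])   # power falls off as 1/f (skip DC)
    pink = np.fft.irfft(spectrum * scale, n=n_samples)
    return pink / np.max(np.abs(pink))     # normalize peaks to -1..1

def set_rms_db(x: np.ndarray, target_db: float) -> np.ndarray:
    """Scale a signal so its RMS lands at the target dBFS value."""
    target = 10 ** (target_db / 20)
    return x * (target / np.sqrt(np.mean(x ** 2)))

# One second of a -14 dBFS pink noise bed to balance elements against.
bed = set_rms_db(pink_noise(48000), -14.0)
```

In the DAW you would simply load a noise generator, but the idea is the same: a fixed, spectrally even bed at -14 dBFS that every element's fader is raised against.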
5. Panning:
A lot of people aren't aware that we can get a very wide stereo image just by using mono elements and panning them left and right. Automating the pan pots also gives movement to the mix and takes away its monotony. That said, panning everything will pull at listeners' attention and may become irritating at some point, so don't overdo it.
6. Saturation:
Finding the sweet spot of saturation is one of the biggest challenges for mixing engineers, because people tend to overdo it. The last thing you want is everything sounding distorted, so that the mix turns harsh when you listen on headphones. Add subtle saturation, just enough to get some warmth, add some color, and make the RMS value pop out a little.
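For the curious, subtle saturation can be sketched in a few lines of Python. This tanh soft-clipper is just an illustrative waveshaper (names mine), not a model of any particular plugin:

```python
import numpy as np

def saturate(x: np.ndarray, drive: float = 1.5) -> np.ndarray:
    """tanh soft-clipper: gentle harmonics, peaks stay put, RMS comes up."""
    return np.tanh(drive * x) / np.tanh(drive)

# A subtle pass over a test tone: same peak ceiling, higher RMS.
tone = np.sin(2 * np.pi * 100 * np.arange(48000) / 48000)
warm = saturate(tone, drive=1.5)
```

The tanh curve leaves peaks at the same ceiling while lifting the body of the waveform, which is exactly why gentle saturation makes the RMS value "pop out a little".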
7. Dynamic Processing:
Compression, expansion, gating, and limiting are the dynamics processors used in this stage. Along with the different compressor topologies (FET, tube, opto, VCA, OTA), the compression techniques themselves (serial or parallel compression) also play a vital role. And it is not just compression: gates and expanders matter here too.
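Whatever the topology, the core of every compressor is its gain computer: above the threshold, the output level rises only 1 dB for every `ratio` dB of input. A minimal static sketch in Python (hard knee, no attack/release, names mine):

```python
def compressor_gain_db(level_db: float, threshold_db: float = -18.0,
                       ratio: float = 4.0) -> float:
    """Static gain computer: dB of gain change for a given input level.

    Above the threshold the output rises only 1 dB per `ratio` dB of
    input; the scaled-away excess is the gain reduction applied.
    """
    over = level_db - threshold_db
    if over <= 0.0:
        return 0.0          # below threshold: leave the signal alone
    return -(over - over / ratio)
```

For example, a peak 8 dB over the threshold at 4:1 comes back 6 dB quieter. A real compressor wraps this curve in an envelope follower (attack/release), and the different topologies color that behavior in their own ways.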
8. EQing:
Additive and surgical are the two EQing approaches we use to make things sound better. First we apply surgical EQ to take out any resonant or harsh frequencies (using notch filters), and then enhance some frequencies using analogue-modelled EQs (Neve 1073, API 550, etc.).
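A surgical notch can be sketched with a standard biquad filter. The coefficients below follow the well-known RBJ Audio EQ Cookbook formulas; the high Q gives the narrow, surgical cut described above (pure Python, names mine):

```python
import math

def notch_coeffs(f0: float, fs: float, q: float = 30.0):
    """RBJ-cookbook biquad notch centred on f0 Hz; high Q = surgical cut."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1.0, -2 * math.cos(w0), 1.0]
    a = [1 + alpha, -2 * math.cos(w0), 1 - alpha]
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

def biquad(x, b, a):
    """Direct-form I filter over a list of samples."""
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for s in x:
        out = b[0] * s + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        y.append(out)
        x1, x2, y1, y2 = s, x1, out, y1
    return y
```

Feed a 3 kHz tone through a 3 kHz notch at 48 kHz and it all but disappears, while frequencies outside the narrow band pass through untouched, which is exactly the behavior you want when hunting a single resonance.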
9. Effects Processing:
This is where you set up all the return tracks and start adding the time-based processors. Remember, while exporting the stems, make sure to turn off all time-based effects on the inserts and in the synths themselves. There are tricks you can use to make your reverb sound much better (sidechain compression, decay modulation).
10. Bus Compression:
Bus compression is used to glue things together, especially when added on the group tracks. It ensures that no single element overpowers its group, and it keeps the sound consistent throughout the entire track. A rule of thumb is that 4 to 5 dB of gain reduction can be tolerated, but make sure not to over-compress.
11. Analogue Color:
As cheesy as it sounds, analogue color is an essential part of today's sound on industry-standard engineered tracks. True analogue summing is impossible in the digital domain, but here we are only interested in the color. Plugins such as VMR (Virtual Mix Rack) from Slate Digital and NLS from Waves help us get that character inside our DAWs.
12. Exporting:
Make sure to select a sample rate of 48 kHz and a bit depth of 24 bits, and export the file as ".wav" (RIFF) or ".aiff". Voila! Your mixing is pretty much done; the next step is to master the track. Let's get onto that now, shall we?
Importance of Mastering:
A lot of people have a huge misconception about mastering: they think it is all about loudness, but it isn't. Making your track loud is part of the mastering process, but it is not the whole of it. The track also needs to be compatible with CD or vinyl players; this is where making it compatible with the medium comes into the picture. For CD, the sample rate should be 44.1 kHz and the bit depth 16 bits.
If it isn't done this way, CD players won't support the audio. Making the track compatible with streaming platforms is also very important, and here comes another meter: LUFS (Loudness Units relative to Full Scale). Different streaming platforms use different LUFS targets for loudness normalization, but a ballpark for a master is around -8 LUFS. Mastering is essential to make your track compatible with the medium, don't forget that!
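Reducing a 24-bit mix to the CD's 16 bits is a simple quantization step; common practice is to add a little dither first so the rounding error doesn't turn into audible distortion. A Python/NumPy sketch of the idea (my own illustration, using TPDF dither; names mine):

```python
import numpy as np

def to_16bit_pcm(audio: np.ndarray, seed: int = 0) -> np.ndarray:
    """Quantize float audio (-1..1) to 16-bit integers with TPDF dither, as for CD."""
    rng = np.random.default_rng(seed)
    # Triangular (TPDF) dither of +/-1 LSB decorrelates the rounding error.
    dither = rng.random(audio.shape) - rng.random(audio.shape)
    scaled = audio * 32767 + dither
    return np.clip(np.round(scaled), -32768, 32767).astype(np.int16)
```

Sample rate conversion to 44.1 kHz would be a separate resampling step before this; the point here is only that the 16-bit grid is coarser, and dither trades a tiny bit of noise for freedom from quantization distortion.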
1. Clipping:
Clipping is important because some pesky peaks may cause distortion at the final limiting stage of mastering. To stop those peaks hitting the ceiling of the limiter, we intentionally clip the signal, and in the process we also gain a little RMS.
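A hard clipper is conceptually very simple. This Python sketch (function names mine) trims the stray peaks at a chosen ceiling, and the crest-factor helper shows how clipping shrinks the gap between peak and RMS:

```python
import numpy as np

def clip_peaks(x: np.ndarray, ceiling_db: float = -0.5) -> np.ndarray:
    """Hard-clip stray peaks at the ceiling so the limiter sees a tamer signal."""
    ceiling = 10 ** (ceiling_db / 20)
    return np.clip(x, -ceiling, ceiling)

def crest_factor_db(x: np.ndarray) -> float:
    """Peak-to-RMS distance in dB; clipping shrinks it, nudging RMS up."""
    return float(20 * np.log10(np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2))))
```

Only samples above the ceiling are touched; everything else passes through unchanged, which is why careful clipping of a few transients is far less audible than letting the limiter grab them.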
2. EQing:
This is the final chance to correct mistakes. If you feel the urge to boost or cut any frequency by a large amount (more than about 9 dB either way), it is advisable to solve the problem back in the mixing stage rather than in mastering.
3. Multi-band Compression:
Multi-band compression ensures that every band stays consistent and that no element overpowers the other elements within its frequency band. A general rule is that around 2 dB of gain reduction is acceptable. Be careful with the compression you apply at this stage; do not overdo it.
4. Saturation:
To raise the RMS a little further, we add very subtle saturation here. Not much, just a small amount; exciters play a vital role in this.
5. Reverb:
This reverb is used to glue everything together; obviously, we are not going to use a heavy reverb with a long tail. The dry/wet percentage should be around 14% to 15% and the decay time around 600 ms. Make sure to cut the low frequencies inside the reverb and keep the pre-delay as short as possible.
6. Exporting:
MP3 (CBR, 320 kbps), AAC (128 kbps), and WAV (44.1 kHz/24-bit) are the audio formats you should export and keep ready. Also make sure the track still sounds good after lossy audio compression.
7. Stereo Imaging:
Stereo imaging is crucial for making sure your track still sounds good on mono-compatible systems. Keep an eye on the correlation meter while you adjust the width. A rule of thumb: the correlation meter should not go below zero; in fact, it shouldn't even drop past 0.5, and should ideally stay between 0.5 and 1.
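The reading on a correlation meter is essentially the normalized cross-correlation of the two channels: +1 means mono-identical, 0 means fully decorrelated, and -1 means out of phase. A Python sketch of the measurement (my own simplified version, computed over a whole buffer rather than the short sliding window a real meter uses):

```python
import numpy as np

def correlation(left: np.ndarray, right: np.ndarray) -> float:
    """Phase correlation between channels: +1 mono, 0 decorrelated, -1 out of phase."""
    num = np.sum(left * right)
    den = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
    return float(num / den) if den else 0.0
```

Widening tools push this value down toward zero; the 0.5-to-1 rule of thumb above is simply a guard band that keeps the mono fold-down from losing too much energy.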
8. Limiting:
Before increasing the gain with limiters, it is best to add a LUFS meter, so we can make sure the track lands around the -8 LUFS target rather than overshooting or falling short.
These are the steps we follow and preach. This is an overview of the mixing and mastering module, taught over one month as part of our Online Music Production PRO course, which offers 3 months of live online training conducted over Zoom. The course also covers music production, music theory, sound design, and track arrangement in great detail.
The course also includes a free Hooktheory membership for 3 months, a 1-year free membership to recorded live sessions, and our latest offering, the "All Access Pass": an online learning platform offering recorded courses, audio podcasts, ear training games (in English, Hindi & Gujarati), and access to forums and sample banks.
To know more about the mixing and mastering courses, please log on to https://www.loststoriesacademy.com/live-classes/.
Thank you so much for reading this one; I'll see you in another blog.
Here is a question I get from a lot of people, both while teaching students and while giving seminars. People have all sorts of wild theories about why mixing/mastering engineers use a dual monitor setup in the studio when a single pair of monitors could do the same work for them.
I understand the inquisitive indignation, but the point is, there is no simple or easy answer to this question. It gets really complex as to why, even today, engineers still use a two-monitor setup, and they do a really great job at getting industry-standard output from it.
A lot of engineers I have seen who don't use a dual monitor setup quite often fail to deliver that industry-standard mix. The reasons may be numerous, and tackling the root cause of the problem is the only way to get better mixes.
If the room is acoustically well treated, the next question is ear training: whether the engineer working in the studio is well trained. This is the main asset of any mixing/mastering engineer. I used to think everything else then falls right into place, but not really.
I have personally worked in big studios where a lot of engineers dream of working, and I thought having a single pair of monitors from a well-renowned company (like Genelec, Dynaudio, or Fluid Audio) would solve most of my problems. But then there is the problem of referencing on multiple other audio systems: consumer speakers, Bluetooth speakers, even our smartphone speakers. To do this, I had to copy the final master to a flash drive every time and listen to the same output on multiple speaker systems, which was a very time-consuming job, if you ask me.
Not immediately, but soon enough, Yamaha stopped producing its famous NS10 studio monitors. A lot of people say the main reason was that listeners never really liked the sonic quality reproduced by the NS10.
But why was this the case in the first place? Let’s find out!
1. YAMAHA NS10
The majority of consumer audio speaker systems are heavily colored. By "color", I mean the frequency bias a piece of audio gear or equipment has when playing back any track. Consumer audio systems were heavily colored; we could quite often see the low frequencies (typically 30 Hz to 150 Hz) heavily boosted, along with some brilliance frequencies (around 6 kHz to 15 kHz). Because of this bias toward certain frequencies, such colored speakers were no longer useful as references to us (us being the mixing/mastering engineers).
The main reason the NS10 and Auratone 5C were used is that these two monitors are essentially flat, with no color bias. This was crucial for judging how a song would sound on the world's flattest monitors. The thing about the NS10 is, if you listened to your favorite track on them, you would start hating it, because it sounds chaotic: every frequency laid bare, unbalanced, not the way it was intended to sound.
See, the thing about a flat-frequency-response monitor is, once you make your mix sound good on it, chances are you never have to check whether it sounds great on different audio systems, because it will translate to almost all of them. This was the power of having a reference monitor.
This is why the majority of the industry's mixing and mastering engineers tried to get their hands on these monitor speakers. But soon the reign of these reference monitors ended, because most pro audio manufacturers started making monitor speakers voiced remotely close to consumer audio speakers.
This changed everything about flat-response monitoring: we started getting more colored monitors, with the low and high ends of the frequency spectrum boosted.
Over time, the classic reference monitors became rare, and now only a few studios keep these speakers around. Today, if you visit a studio and see a pair of them in the room, chances are that studio's output translates well across different monitoring systems, thanks to those near-flat-response monitors. The best part is, instead of rendering the track and carrying it off to hear it on different speaker systems, you can compare right away using a monitor switcher.
Now, I completely understand if you don't have the budget to buy or reference on these monitors, which is fair enough; not many people start making money right away after completing a music production or mixing and mastering course. Yes, having monitors is great, but even if you're mixing on a pair of studio headphones, you can still get close to that industry-level output: not as good as the pro engineers would, but definitely close. If you haven't already, do check out our Sonarworks blog post, in which I explain different headphones and how to use them to get good mixes by working against a flat frequency response curve. Here are a couple of things you can start doing that will change the way your mix sounds right off the bat:
2. Mix in Mono:
This won't be the first time you've heard this, but honestly it is such an underrated technique; despite all the advice out there, I have hardly seen anyone use it. If you haven't heard of it, here is a simple thing you can do to make your mixes translate better on mono systems: add a plugin on the master channel that folds the output down to mono. This also helps you balance the mid and side information well. For Ableton Live users, this is a piece of cake: just add a Utility device on the master channel and click the Mono button. You will definitely see a noticeable improvement in your mixes if you follow this method.
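The Utility trick boils down to averaging the two channels. Here is a Python sketch (names mine) of the mono fold-down, plus a helper that reports how much level you lose in mono, which exposes phase cancellation problems:

```python
import numpy as np

def mono_sum(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Fold a stereo pair down to mono, like clicking Mono on Live's Utility."""
    return 0.5 * (left + right)

def mono_loss_db(left: np.ndarray, right: np.ndarray) -> float:
    """How much level (dB) disappears when the mix is folded to mono."""
    stereo_rms = np.sqrt(np.mean(left ** 2 + right ** 2) / 2)
    mono_rms = np.sqrt(np.mean(mono_sum(left, right) ** 2))
    return float(20 * np.log10(mono_rms / stereo_rms))
```

Anything that is identical in both channels survives the fold-down untouched; anything out of phase between the channels cancels, which is exactly what you are listening for when you mix in mono.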
3. Use Correlation Meters:
After mixing in mono, you should add some stereo width (side information) back to the audio, because it will sound very pale on stereo systems otherwise, and you'd be denying the benefit of the side information to people listening to your tracks on headphones or a pair of speakers. While adding stereo width (there are tons of ways to do this), keep an eye on a correlation meter on the master channel so you don't go overboard. Voxengo SPAN is a free plugin with a correlation meter you can use on the master channel, or you can use Voxengo MSED, which also has a correlation meter built in.
4. Audified MixChecker:
This is one of the best reference plugins I have ever used; it gives a close impression of how the mix is going to sound on different audio systems. Without exporting and checking on each system, you can use this plugin directly to hear how the mix translates to car audio, tablets, smartphones, laptops, PC speakers, home theatre speakers, and many more.
All of these things are crucial for getting the final mix out; they make sure your song translates well on different speaker systems, and that is what matters in the end.
I hope you guys enjoyed reading this blog and now have a basic idea of the world of audio and why music mixing/mastering engineers use a dual monitor setup. I'll see you guys in another one; thanks for reading.