Quote:
Originally Posted by fastlane
Is it that I need to give every instrument its own space in the mix?
This is something I am still working on but can never quite get. Is there any trick to it, or is it just trial, error and experience? If anyone has any advice it would be much appreciated
All the best
I think most engineers learn this by trial and error. Instrument separation is what lets the listener get attached to the mix. Combined with automation, it can be used to add attention points to the song that make the listener want to hear it from beginning to end. I have attached a mix to this post to illustrate what I mean. When you listen to it, notice how you automatically start focusing on the bass line; it's Marcus Miller doing some nice bass guitar playing. The great thing is that it's easy to hear what he is playing. You don't need to spend any extra energy or focus to perceive what's beautiful in this mix; it comes to the listener automatically. And if you want to hear what happens in the rest of the rhythm element, that's easy too: you have a few things in the left speaker and a few things in the right speaker, and it sounds like two percussionists standing on one side each. That's what instrument separation is all about.
Pay attention to how I described the rhythm element: two percussionists, not one. Good instrument separation comes from capturing the band so that you can picture it playing in front of you; the stereo image is realistic. What I meant by trying to improve the instrument separation in your mix was first of all to make it sound realistic. As you can hear, the snare drum lies on top of the whole band. I see a big snare on top of everything, and on the sides I see two groups of people standing very close together, with one singer in each group singing the same thing. The bass guitarist is somewhere, but I can't really figure out where. The other guitarist is somewhere in the middle, quite far away. In a way it sounds powerful as it is right now; the problem is that it's difficult to get really attached to the producer's ideas, and that attachment is VERY important in professional mixes, more important than a "big sound".
How can you solve this? First you need to analyze what kind of material you have on your hands. In this case you have a mix based on a lot of keyboard/software samples. Typically that means you are dealing with a fairly poor signal when you start mixing, so you need to clean it up. You can discuss this with the producer. Typically you use the mute button: you decide how many elements you will leave playing at the same time, and what type they are. In this case I think three light elements is the way to go: vocals, pad (strings, synth or guitar, but only one of these at a time), and rhythm (bass and drums plus any added rhythm tracks).

Once you have these elements at hand, it is pretty simple. You use the pan knobs to distribute the signals across the sound field so that as little frequency masking as possible takes place. In the center you put the vocals, because the singer stands in the middle of the band. Then you add the drums. The drummer has his kit placed slightly to the left of the singer; he has a huge kit, so the hi-hat, one tom and some cymbals end up on the right side of the singer. You can distribute it like this: L65 - R35. The bass guitarist stands on the right side of the singer, to the right of the drummer, and his bass sound fills up quite a lot of the stage: L25 - R75. There's also another guitarist, who stands quite far to the left of the singer: L90 - L30. Now you have described where these people stand.

Next you need to specify the depth, so that the listener's brain can get attached to the different elements by locating them on the Z-axis. The singer stands at the front and kind of leans forward: boost the high frequencies a little. The drummer is clearly at the back, behind the vocalist: boost the low frequencies a little. The bass guitarist stands somewhere in the middle: boost the mid frequencies a little. The other guitarist stands beside the singer, far at the front: boost the high frequencies a little. Now you have specified in more detail where the different musicians are.

But in order to know precisely where they are, you need to establish the room they are in. Depending on what kind of room the instruments were tracked in, you approach this in different ways. Typically the room is not big enough, so you need to simulate one. The singer stands in the middle, where the distance to the ceiling and the walls is greatest: add a room reverb and set it as wet as half the sound stage you need. The drums are farther away, close to a wall: set the reverb wetter than the vocals and add a medium-short delay. The bass is in the middle of the depth field, to the right of the singer: set the reverb wetter than the vocals but less wet than the drums, and add a short delay on the left channel. The guitarist is at the front, close to a wall: set a short delay on the right channel and a little reverb. Now you have described the band.

The next thing is the sound composition. The audience wants to hear the singer clearly: maximize that volume. The audience wants to hear the drummer pretty well too: set it almost as loud as the singer. The audience wants to hear guitar licks and similar details clearly: use automation on some parts and set the overall volume lower. The audience wants to hear the bass line clearly: set it as loud as the drummer.
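In case it helps to see the left/right and depth placement as plain numbers, here is a minimal Python/NumPy sketch of the general idea, not anything from an actual session: a standard constant-power pan law for the X position, plus a pre-delay and wet/dry reverb mix to push a track further back on the Z-axis. The post above describes stereo spreads (L65 - R35 and so on); for simplicity this sketch places each track at a single pan position, and the pan values, pre-delay times, wet ratios and toy impulse response are made-up illustrations, not recommended settings.

```python
import numpy as np

SR = 44100  # sample rate used throughout this sketch

def constant_power_pan(mono: np.ndarray, pan: float) -> np.ndarray:
    """Pan a mono track: pan=-1.0 is hard left, 0.0 is center, +1.0 is hard right.

    Uses the common sin/cos constant-power law so the total energy stays the
    same at every position.
    """
    theta = (pan + 1.0) * np.pi / 4.0  # map [-1, 1] onto [0, pi/2]
    return np.column_stack((mono * np.cos(theta), mono * np.sin(theta)))

def toy_room_ir(seconds: float = 0.4) -> np.ndarray:
    """Decaying noise standing in for a room reverb impulse response."""
    rng = np.random.default_rng(0)
    n = int(seconds * SR)
    return rng.standard_normal(n) * np.exp(-np.linspace(0.0, 6.0, n))

def push_back(mono: np.ndarray, wet_ratio: float, predelay_ms: float,
              ir: np.ndarray) -> np.ndarray:
    """Move a track 'deeper' into the virtual room: wetter reverb + longer pre-delay."""
    wet = np.convolve(mono, ir)[: len(mono)]
    pre = np.zeros(int(predelay_ms * SR / 1000.0))
    wet = np.concatenate((pre, wet))[: len(mono)]
    return (1.0 - wet_ratio) * mono + wet_ratio * wet

if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, SR, endpoint=False)
    guitar = np.sin(2.0 * np.pi * 330.0 * t) * np.exp(-t * 3.0)  # toy guitar-ish tone
    ir = toy_room_ir()
    # Far left and close to the front: fairly dry, no pre-delay, panned well left.
    front_left = constant_power_pan(push_back(guitar, 0.15, 0.0, ir), -0.8)
    # The same source pushed behind the vocalist: wetter, later, nearer the center.
    back_center = constant_power_pan(push_back(guitar, 0.45, 25.0, ir), -0.1)
    print(front_left.shape, back_center.shape)  # both (44100, 2) stereo arrays
```

Running it only prints the shapes of the two stereo arrays; the point is that "position" and "depth" come down to a handful of numbers (pan angle, pre-delay, wet ratio) you can reason about per track.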
So this is basically how you create instrument separation, and it happens in the mixing process. The mastering engineer then processes the mix in a way that preserves this whole image; for that reason he uses an M/S approach. The sum of all this is a clear, big mix that the listener gets attached to. Pay attention to WHEN you apply instrument separation: it takes place AFTER optimizing each instrument that was tracked. You always make each instrument sound its best in SOLO before you start placing the instruments in the sound field; doing it the other way around leaves you with an unrealistic stereo image. It's also important to check for mono compatibility, so fine-tune the track positioning in mono. If the less important instruments (the electric guitar in this case) start sounding too loud in mono, compensate with EQ cuts on those tracks (if the instrument is at the front, cut lows; if it is in the middle, cut highs and lows; if it is at the rear, cut highs; doing it the other way around will damage the stereo image). To push the instrument separation even further, experiment a little.
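For the M/S and mono-compatibility points, here is another small NumPy sketch of just the underlying arithmetic: mid is the sum of left and right, side is the difference, and a mono fold-down simply sums the two channels so you can measure how much level you lose when the mix is played back in mono. The test signals and numbers are arbitrary examples, not part of the mix discussed above.

```python
import numpy as np

def ms_encode(stereo: np.ndarray):
    """Split an (N, 2) stereo signal into mid (L+R) and side (L-R) components."""
    mid = 0.5 * (stereo[:, 0] + stereo[:, 1])
    side = 0.5 * (stereo[:, 0] - stereo[:, 1])
    return mid, side

def ms_decode(mid: np.ndarray, side: np.ndarray) -> np.ndarray:
    """Rebuild L/R from mid and side; if neither was touched this is lossless."""
    return np.column_stack((mid + side, mid - side))

def mono_fold_down_db(stereo: np.ndarray) -> float:
    """Level change (in dB) when the stereo mix is summed to mono.

    A large negative number means heavy phase cancellation: whatever lives in
    the side channel disappears when the mix is played back in mono.
    """
    rms = lambda x: np.sqrt(np.mean(x ** 2) + 1e-12)
    mono = 0.5 * (stereo[:, 0] + stereo[:, 1])
    return 20.0 * np.log10(rms(mono) / max(rms(stereo[:, 0]), rms(stereo[:, 1])))

if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, 44100, endpoint=False)
    tone = np.sin(2.0 * np.pi * 200.0 * t)
    centered = np.column_stack((tone, tone))       # same signal in both channels
    out_of_phase = np.column_stack((tone, -tone))  # worst case for mono playback
    print(f"centered tone folds to mono at     {mono_fold_down_db(centered):+.1f} dB")
    print(f"out-of-phase tone folds to mono at {mono_fold_down_db(out_of_phase):+.1f} dB")
    mid, side = ms_encode(centered)
    assert np.allclose(ms_decode(mid, side), centered)  # M/S round trip is exact
```

The out-of-phase example collapses almost completely in mono, which is exactly the kind of problem the mono check is meant to catch before you reach for the EQ compensation described above.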
Good luck with your mixing!