The No.1 Website for Pro Audio
Mixing synthesizers into grindcore
Old 1 week ago
Here for the gear


I'm trying to get my synth and guitar to both cut through, but they feel like they're competing for space in the mix. I'm having a hard time EQing them to play nice with each other; one always ends up overpowering the other. Curious if anyone has ideas on how to improve this.
I'm using the Fredman technique with two SM57s on the guitar cabinet, and the synth goes direct into my interface.

I noticed a little bit of improvement after running the synth into a compressor, but there's still a long way to go.

I'm trying to do a synth + grind/powerviolence thing similar to The Locust, and I'm completely befuddled by how they got the mix to sound that coherent on Plague Soundscapes.

If anyone has any ideas they would be greatly appreciated.
Old 1 week ago
Automate the EQ so they take turns overpowering each other?

No clue. Mainly just posting to say that The Locust were great. That whole post-Swing Kids San Diego scene was killer for a while.
Old 1 week ago
Gear Maniac

As a deathcore fan I'm very interested in the sound you're describing :p

I would first of all take a look at what the different elements are playing. Synth and guitar can both be midrange-heavy; if they are playing within the same range musically, it will be hard to distinguish what they are doing.

If, for example, a synth plays the exact same melody as a guitar, they will start to sound like one instrument. That can be a good thing, but if you want the synth to be heard as its own element, it's better to have it play an octave above the guitar.

If that's sorted, I would put on a reference track that has a sound close to what you're looking for and compare the sound of its guitar and synth to your mix.
The guitar or synth will probably be less upfront in some parts, giving space to the other. Listen to your reference track, pay attention to the elements that are not the point of attention when the synths are playing, and try to mimic that EQ.

I know that in grindcore everything needs to sound upfront all the time, but you can still make space for other elements to become the point of attention. A lot of automation (EQ, volume, reverb, delay) is the key to a good metal sound.

Hope this helps!
Old 1 week ago
Lives for gear

First off, I'd suggest you master using one mic before complicating things with two.
When you use two mics on a single sound source you always have phase cancellation issues to deal with, so you've already got frequency losses before you even begin to mix.

Once you've mastered getting an ideal sound with one mic, you can learn how to calibrate mic distances using a scope so the waveforms are in sync and signal losses are minimized.
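To put rough numbers on the phase issue, here's a quick Python sketch of where the comb-filter cancellation notches land for a given path-length difference between the two mics. The 3 cm capsule offset and the 343 m/s speed of sound are example assumptions, not measurements:

```python
SPEED_OF_SOUND = 343.0  # m/s in room-temperature air (assumed)

def comb_notch_freqs(path_diff_m, max_freq_hz=20000.0):
    """Notches fall where the path difference equals an odd number of
    half wavelengths: f = (2k + 1) / (2 * delay)."""
    delay = path_diff_m / SPEED_OF_SOUND  # arrival-time difference, seconds
    notches = []
    k = 0
    while True:
        f = (2 * k + 1) / (2 * delay)
        if f > max_freq_hz:
            break
        notches.append(round(f, 1))
        k += 1
    return notches

# A hypothetical 3 cm mismatch between the two SM57 capsules:
print(comb_notch_freqs(0.03))  # -> [5716.7, 17150.0]
```

Even a few centimetres of mismatch puts the first notch right in the presence region, which is why lining the capsules up on a scope matters.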

Next, your issue with getting the two musical parts to work together falls under the topic of "frequency masking."
I suggest you do some googling on the topic to familiarize yourself with the problem so you know how to avoid it, and minimize it when it does occur.

The essence of it is this: you can't have two instruments compete for the same frequency range. All you do is wind up in a loudness war between the two parts.
I learned that lesson 50+ years ago when I played in my first band with another guitarist. Guitarists playing cover music often dial up the same tones as they both try to play the lead parts. You soon wind up in a battle of who can play loudest as each player tries to be heard through the masking. Eventually you learn that each guitar must cover its own frequency range in order to be properly heard: one can have scooped mids, for example, while the other has boosted mids; one more treble while the other has more bass; and so on.

What also comes into play is the musical arrangement and the timing of the notes. If the instruments are playing identical pitches with identical timing, it gets much tougher to separate them into independent personalities. This is where harmony and chord inversions play a role. If you have one guitar playing root chords, you may want to use parts a 3rd, 5th, 7th, or even an octave above to "expand" the frequency range instead of trying to make everything fit into a narrow one.
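To see how much those intervals spread the fundamentals apart, here's a small equal-temperament sketch. The 220 Hz root and the major-key interval sizes are just example choices:

```python
# Where a harmony part lands if it sits a 3rd, 5th, 7th, or an octave
# above an A3 (220 Hz) root, in 12-tone equal temperament.
def interval_freq(root_hz, semitones):
    """Frequency of the note `semitones` above `root_hz` (12-TET)."""
    return root_hz * 2 ** (semitones / 12)

root = 220.0
for name, semis in [("major 3rd", 4), ("perfect 5th", 7),
                    ("minor 7th", 10), ("octave", 12)]:
    print(f"{name}: {interval_freq(root, semis):.1f} Hz")
```

An octave above doubles the fundamental, so a synth line an octave up starts its whole harmonic series where the guitar's second harmonic sits.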

Keep this in mind too: a musical note consists of a pitch. If it's an electronic sine wave, its frequency response is 100% fundamental. If it's an acoustic instrument, it's likely to contain overtones, harmonic resonances above the fundamental note which "expand" the frequency response.
A string on a guitar may produce its fundamental at 440 Hz (an A note), its 2nd harmonic at 880 Hz, its 3rd at 1320 Hz, its 4th at 1760 Hz, and so on. These harmonics build on the fundamental and give the string its unique tone.
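The series above is just integer multiples of the fundamental, which a one-liner can confirm:

```python
# Harmonic series of an open A string: integer multiples of 440 Hz.
def harmonics(fundamental_hz, count):
    """First `count` harmonics, fundamental included."""
    return [fundamental_hz * n for n in range(1, count + 1)]

print(harmonics(440.0, 4))  # -> [440.0, 880.0, 1320.0, 1760.0]
```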

If you've ever seen a guitar note on a scope, you'll recognize it's far from symmetrical or sine-like. Keyboard waveforms, on the other hand, are nearly always pristine and perfect: both halves of the wave tend to be even, and even when harmonics are added they tend to be mathematically perfect, whereas a string has many random influences.

Why is this important? Because you mentioned compressing the signal to try and make it work. The reason compression only got you so far is that it levels everything and eventually pushes the waveform toward a square wave, flattening the tops and destroying the harmonics that make the notes unique.
The guitar eventually stops sounding like a string and starts to sound like an inferior keyboard note, which, as you found, gets masked by the much stronger, highly symmetric real keyboard note.

What you need to start doing is learning how to separate instruments using frequency manipulation (and musical arrangement).
Forget about stereo panning until you first learn how to use the frequencies between 20 Hz and 20 kHz effectively. If you need expert examples of this, simply listen to any early Beatles recording and you'll hear how the masters could create so much space between parts using only one speaker.

Two tools that can help. The first is a frequency analyzer like Voxengo SPAN, a dual-channel (or more) frequency analyzer. You could set it up on the main bus, pan your two parts hard left and right, and then "see" the frequencies that have been tricking your ears. Then you can use an EQ to shift the two parts away from each other so one isn't masking the other so much, and get the separation you desire.
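SPAN does this in real time, but the underlying idea can be sketched offline: compute the magnitude spectra of the two parts and flag the bins where both carry significant energy. This toy pure-Python version uses a naive DFT on synthetic tones; the threshold and test signals are made up for illustration:

```python
import math

def dft_mag(signal):
    """Magnitude spectrum of a real signal (naive DFT; fine for demos)."""
    n = len(signal)
    mags = []
    for k in range(n // 2):
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im) / n)
    return mags

def masked_bins(spec_a, spec_b, threshold=0.1):
    """Bins where both parts are loud at once -> masking risk."""
    return [k for k, (a, b) in enumerate(zip(spec_a, spec_b))
            if a > threshold and b > threshold]

# Two synthetic parts sharing one strong component (bin 8 of a 64-sample frame):
n = 64
guitar = [math.sin(2 * math.pi * 8 * t / n) + 0.5 * math.sin(2 * math.pi * 3 * t / n)
          for t in range(n)]
synth = [math.sin(2 * math.pi * 8 * t / n) + 0.5 * math.sin(2 * math.pi * 20 * t / n)
         for t in range(n)]
print(masked_bins(dft_mag(guitar), dft_mag(synth)))  # -> [8]
```

The flagged bin is where an EQ cut on one part (or boost on the other) would buy you separation.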

This gives you the general idea of what I've been talking about. You can see the peaks are separated to give each part its own bit of frequency turf. The guitar has a scoop where the vocals are centered, to prevent the guitar from masking the vocal part. Of course, if you were to see these peaks while the music is playing, they wouldn't be fixed like this; they would move left and right depending on whatever pitch is being produced. Something like a kick drum doesn't normally change pitch during a song, which is one reason most engineers mix the drums first and then divide up the space that's left among the other instruments.

Another item to look at: the yellow vocal part in this example extends way down into the bass, to 30 Hz and below. Given it's very low in volume there, it likely contains no real vocal information, just low-frequency noise. You can probably use a high-pass filter to remove everything below 200 Hz or so and it won't even be missed. That clears out the low-frequency clutter and makes the bass and kick sound better.
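Any DAW's high-pass filter will do this, but as a sketch of what it's doing under the hood, here's a simple first-order high-pass in pure Python. A real 200 Hz clean-up filter would usually be steeper than this 6 dB/oct toy, but the behavior is the same in kind:

```python
import math

def one_pole_highpass(samples, cutoff_hz, sample_rate=44100.0):
    """First-order high-pass (~6 dB/oct): keeps content above cutoff_hz,
    attenuates rumble below it."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out = [samples[0]]
    for i in range(1, len(samples)):
        out.append(alpha * (out[-1] + samples[i] - samples[i - 1]))
    return out

# One second each of 30 Hz rumble and a 1 kHz tone, through a 200 Hz filter:
sr = 44100
rumble = [math.sin(2 * math.pi * 30 * t / sr) for t in range(sr)]
tone = [math.sin(2 * math.pi * 1000 * t / sr) for t in range(sr)]
peak = lambda x: max(abs(s) for s in x[sr // 2:])  # measure after the transient
print(peak(one_pole_highpass(rumble, 200)))  # rumble: heavily attenuated
print(peak(one_pole_highpass(tone, 200)))    # tone: nearly untouched
```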

In your case, with a synth, guitar, bass, and kick all fighting for those same low frequencies, you'll need to do some serious examination to see if you have the pitch range to separate parts and avoid masking. A frequency analyzer should let you trim frequencies like a surgeon with a scalpel, removing what's needed to allow each part to be heard.

One thing you should never, ever do is expect parts to sound good solo'd after you start mixing them to fit together. Each part is linked like a piece of a jigsaw puzzle, and a single piece doesn't tell you much about how the entire picture should look. You may wind up doing serious damage to a part to force it to fit into the much greater picture. It's not uncommon, for example, for a vocal part to sound thin as all get out when you solo it, only to have it sound fantastic within the full mix.

You cannot get an ideal mix by taking each track, setting it to solo, and making it sound as fat as you can. All that does is make the parts overlap even more and increase the masking. The idea is to narrow each part to a span that fits the instrument's voicing. For guitar, the effective range is roughly 200 Hz to 5 kHz in most mixes; bass, 80 Hz to maybe 2.5 kHz; drums can begin at 80 Hz and end at 13 kHz with the cymbals up top. They have hills and valleys in between that leave room to fit the other instruments in. If you have more instruments, you have to narrow the maximum ranges they cover.

After getting things to sit right by frequency in a mono mix, you can go ahead and use stereo panning to get left/right separation, and time-based effects like chorus, flanger, echo, and reverb to produce three-dimensional, front-to-back depth within the mix. This last step cannot be done on headphones, however. You have to use monitors in open air to generate crossfeed so your external ears can judge actual distances. With headphones everything centers inside your skull; there is no distance between you and the sound source, so part of the masking may actually be the result of a two-dimensional mix (stick figures) that has height and width but can never sound real, because you're handicapped when it comes to depth.
Old 1 week ago
I'm trying to get my synth and guitar to both cut through the mix but I feel like they're competing for space in the mix and i'm curious if anyone has any ideas on how to improve
Two things that every mix should incorporate:
complementary EQ and a well-built 3D stereo field. With those two in place, each instrument will live in its own frequency range and reside in its own spot in the stereo field.

With complementary EQ, you take the instruments that occupy the same frequency range and carve each one its own space by cutting and boosting certain frequencies.
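A toy sketch of that idea: wherever both parts are strong in the same band, cut one and boost the other by the same amount so the pair stays balanced. The band names, energy numbers, threshold, and 3 dB amount below are made-up examples, not a recipe:

```python
def complementary_eq(bands_a, bands_b, amount_db=3.0, threshold=0.5):
    """bands_*: {band_name: energy 0..1}. Returns per-band gain (dB)
    adjustments for each part wherever both are strong at once."""
    eq_a, eq_b = {}, {}
    for band in bands_a:
        if bands_a[band] > threshold and bands_b.get(band, 0) > threshold:
            eq_a[band] = -amount_db  # cut part A here...
            eq_b[band] = +amount_db  # ...and boost part B
    return eq_a, eq_b

# Hypothetical band energies for the two parts:
guitar = {"low": 0.2, "low-mid": 0.8, "high-mid": 0.9, "high": 0.3}
synth = {"low": 0.6, "low-mid": 0.9, "high-mid": 0.4, "high": 0.7}
print(complementary_eq(guitar, synth))
# only "low-mid" collides -> ({'low-mid': -3.0}, {'low-mid': 3.0})
```

In practice you'd decide per collision which part owns the band, rather than always cutting the same one.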

With a 3D stereo field, you place each instrument in its own space: back left, back right, front center, middle 60% right, middle 60% left, front left, front right, back center, and so on.
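The left/right half of that placement can be sketched with a standard constant-power pan law (depth placement needs time-based effects and isn't shown here); the position names are just examples:

```python
import math

def constant_power_pan(position):
    """position in [-1.0 (hard left) .. +1.0 (hard right)].
    Returns (left_gain, right_gain) with constant total power."""
    angle = (position + 1) * math.pi / 4  # map position to 0..pi/2
    return math.cos(angle), math.sin(angle)

for name, pos in [("front center", 0.0), ("60% left", -0.6),
                  ("60% right", 0.6), ("hard right", 1.0)]:
    l, r = constant_power_pan(pos)
    print(f"{name}: L={l:.2f} R={r:.2f}")
```

Constant power means the part doesn't get louder or quieter as you sweep it across the field, which keeps the balance you built with EQ intact.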