Can I use Logic Pro for Pro sound for video?
Old 17th December 2018
  #1
Gear Nut
 

Can I use Logic Pro for Pro sound for video?

I know I "can", but here's what I'm asking:

How does the process work?

How does a client deliver a video to you (say something like a 30 second or 1 minute thing), and in what format, and can Logic import that?

If I then compose music to the video, how do I typically deliver it back? Do I just give them an audio mixdown, raw audio tracks (stems?), or just the .mov?

IOW, are they going to use my "complete" project, or are they going to just want audio they can mix in later?

What about sync'ing - I remember word clock and SMPTE - are these things to even worry about anymore? If I were to do something like an animated piece where you might want sounds sync'd to eye blinks and whatnot, how do I make the audio so they can sync it back up?

And does Logic have these capabilities?
Old 17th December 2018
  #2
Lives for gear
 
charlieclouser's Avatar
 

Logic has all the capabilities you describe. To answer your questions in order:

- The client will probably deliver to you a QuickTime movie, encoded using the h.264 or ProRes codecs, in a .mov or .mp4 "container" that presents to macOS as a QuickTime movie. This can be directly imported into Logic, and displayed either full-screen on a separate monitor, in a floating window, or in a small pane at the upper left of Logic's Main (Arrange) Window. You set the SMPTE frame rate of your Logic Project to match that of the movie; the movie's rate is shown in the Finder's "Get Info" for the file. For video, 29.97 is most common; for film it's often 23.976. These are set under File > Project Settings > Synchronization in Logic. In that same tab you adjust "Bar Position XXX Plays at SMPTE YYY" so that the start of your music lines up with the first frame of image - usually this should NOT be Bar 1; better to use Bar 5 or Bar 9 or something, so that you have "dead air" before the start of picture. If needed, adjust the settings under File > Project Settings > Movie so that only a portion of the movie plays within your Logic Project - this is usually only needed if you are working on a small section in the middle of the movie.
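As a quick sanity check on the "start at Bar 5 or Bar 9" idea: at a given tempo, a few empty bars of pre-roll translate to a fixed amount of dead air before picture. A throwaway calculation (Python, 4/4 assumed; the function name is mine, not a Logic feature):

```python
def preroll_seconds(bpm, empty_bars, beats_per_bar=4):
    """Seconds of 'dead air' before picture if music starts after empty_bars of pre-roll."""
    return empty_bars * beats_per_bar * 60.0 / bpm

# Starting the music at Bar 5 leaves 4 empty bars:
print(preroll_seconds(120, 4))  # 8.0 seconds of pre-roll at 120 BPM
```

At slower tempos the same bar count buys you more time, which is why the bar number itself is arbitrary; what matters is having some silence before the first frame.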

- Formats described in previous paragraph; yes, Logic can import these directly with no conversion needed.

- To deliver the music back to the client you will definitely give them an audio mixdown, and possibly (probably / always) also give them pre-mixed stems (drums, bass, synths, strings, brass, winds, choirs, etc.). But you probably will not give them individual "raw" audio tracks. Some folks in the record-making world use the word "stems" to describe individual audio tracks in a mix, like kick, snare, tom1, tom2, etc. This terminology is not used the same way when producing music for picture. In that world, a "stem" is a pre-mixed "sub-mix" of a bunch of related elements that want to be dealt with as a single brick, like "all the low percussion", "all the high percussion", "all the various guitar layers", etc. Each of these stems should be mixed, eq'ed, compressed, effected, whatever... so that when all of the stems are added together with their faders at zero, the final mix is heard. That way they can replicate the mix you were hearing, but have control over each group of sounds to, for instance, lower the guitars for a few seconds while leaving the drums as is. Note that working this way means that you can't easily put "mix buss" compressors or mastering plugins on your final mix - and should avoid even trying. Instead, put any "mix buss" processing INDIVIDUALLY on each stem's sub-master. In Logic this means routing individual tracks to a Bus, then creating an Aux Object whose source is that Bus and whose output is your main outs, and repeating this process for each "stem". That way, each of those Bus-fed Aux Objects becomes a "stem sub-master" and you can apply processing to each one separately.
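That "stems at unity sum back to the mix" rule can even be sanity-checked outside the DAW with a rough null test. A minimal sketch using only Python's stdlib, assuming 16-bit WAV bounces of identical length (the file paths and function names here are hypothetical, for illustration only):

```python
import struct
import wave

def read_samples(path):
    """Read all samples from a 16-bit WAV as a flat tuple of ints."""
    with wave.open(path, "rb") as w:
        raw = w.readframes(w.getnframes())
    return struct.unpack(f"<{len(raw) // 2}h", raw)

def max_null_error(mix_path, stem_paths):
    """Sum the stems sample-by-sample and return the largest deviation from the mix.

    A result of 0 (or very close, allowing for rounding) means the stems
    at unity gain reproduce the final mix exactly."""
    mix = read_samples(mix_path)
    stems = [read_samples(p) for p in stem_paths]
    summed = [sum(s[i] for s in stems) for i in range(len(mix))]
    return max(abs(m, ) - abs(s) if False else abs(m - s) for m, s in zip(mix, summed))
```

In practice a few counts of rounding error per sample is normal; anything audible means some element is missing from, or doubled across, the stems.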

- The client will never use, or want to see, your actual Logic Project that contains all your individual audio tracks, MIDI / soft synth tracks, plugins, etc. This is for your in-house use only. They only want a flexible layout of stems. In all likelihood they will be using Pro Tools to assemble all of the audio files of your music, the dialog, the sound effects, etc., so the audio files you give them will be imported into a Pro Tools session somewhere else. You should briefly discuss with whoever will be mixing your music against the dialog and sound effects how many stems they want and how to group the instruments together to make their job easiest and give them the flexibility they need. Maybe give them a stereo rough mix to refer to during this discussion, so they can say things like, "I don't mind if that small amount of percussion is all grouped together, but that wind-chime sound should be on its own stem" and stuff like that. While having this discussion you should also confirm exactly what specs the audio files you give them should have - they will probably want 24-bit, 48kHz WAV files, but you should absolutely confirm these tech specs with the client.

Note that almost all music-for-picture work occurs at a 48k sample rate, NOT the 44.1k you would use for making CDs.

- To sync up the stems that you deliver with the picture, the client will need to know exactly where those audio files need to be placed on a timeline. It is crucial that ALL of your stems have the same start point - if one stem doesn't have any sounds in it for ten minutes, and then has a tiny little chunk of wind-chimes, well... fine - that stem will have ten minutes of silence at the start. It is common practice to put a "two pop" in each stem's audio file - this is a one-frame-long "beep" at 1kHz, usually at -18 dB below full scale. In Logic you can make this by using an empty instance of EXS24 - the default program in EXS produces a sine wave. Insert a one-frame-long MIDI note of C6 on a track containing an empty EXS, adjust so that its level reads around -18 dB, and you should be good. I prefer to bounce this little beep to an audio file, which makes it a little easier to drag from track to track and ensure that it appears on every stem.
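If you'd rather generate the 2-pop file once and keep it around, the spec above (1 kHz sine, one frame long, -18 dBFS) is simple enough to script. A minimal sketch using Python's stdlib `wave` module, assuming 48 kHz audio and 29.97 fps video, written as 16-bit stereo for simplicity (the output filename is my own choice):

```python
import math
import struct
import wave

SR = 48000           # 48 kHz, standard for picture work
FPS = 30000 / 1001   # 29.97 fps video
FREQ = 1000          # 1 kHz beep
DB_FS = -18          # level below full scale

n_samples = round(SR / FPS)      # one frame of audio (about 1602 samples, ~33 ms)
amp = 10 ** (DB_FS / 20)         # -18 dBFS as a linear amplitude (~0.126)

frames = b""
for i in range(n_samples):
    s = amp * math.sin(2 * math.pi * FREQ * i / SR)
    v = int(s * 32767)                   # scale to 16-bit integer range
    frames += struct.pack("<hh", v, v)   # stereo: same sample on L and R

with wave.open("2pop_1k_-18dBFS.wav", "wb") as w:
    w.setnchannels(2)
    w.setsampwidth(2)
    w.setframerate(SR)
    w.writeframes(frames)
```

Either way you make it, the point is the same: one identical beep file that you can drop onto every stem at exactly the same position.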

Make sure this "2-pop" appears on EVERY stem. If your music starts right at the beginning of the picture you've been given, it should be placed exactly two seconds before the first frame of image that you're working against. Alternatively, the 2-pop should appear two seconds before the downbeat of your music if your music doesn't come in until partway through the picture.

It is also important to tell the client where, in timecode numbers, your audio files must be placed. One way to do this in Logic is like this: Let's say your music starts at 01.02.03.04 - create a Marker in Logic at the nearest "whole-second" point BEFORE the 2-pop, which is two seconds before the exact start of your music. In this example the 2-pop would be at 01.02.01.04, so you would create a Logic Marker at 01.02.01.00. Then create another Marker after the end of the music, leaving plenty of time for reverb tails and whatever else to decay fully to silence. Then enable Cycle in Logic, with the left Cycle point set to that first Marker at 01.02.01.00 and the right Cycle point set to that Marker past the end of the music. Now you can Bounce your mix and each of your stems with Cycle enabled, and they will all be exactly the same length - the length of that Cycle between the two Markers.
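The timecode arithmetic above is easy to get wrong by hand, so here's the same example worked out in a short sketch. This treats timecode as plain HH.MM.SS.FF at a whole-number 30 fps and ignores drop-frame complications entirely; the function names are mine, for illustration:

```python
def tc_to_frames(tc, fps=30):
    """Parse 'HH.MM.SS.FF' into a total frame count (non-drop, for simplicity)."""
    hh, mm, ss, ff = (int(x) for x in tc.split("."))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def frames_to_tc(frames, fps=30):
    """Format a total frame count back into 'HH.MM.SS.FF'."""
    ff = frames % fps
    ss = frames // fps
    return f"{ss // 3600:02d}.{(ss % 3600) // 60:02d}.{ss % 60:02d}.{ff:02d}"

music_start = "01.02.03.04"
two_pop = frames_to_tc(tc_to_frames(music_start) - 2 * 30)  # two seconds earlier
marker = two_pop[:-2] + "00"                                # snap back to the whole second
# two_pop -> "01.02.01.04", marker -> "01.02.01.00", matching the example above
```

Real 29.97 drop-frame counting skips frame numbers at minute boundaries, so for actual deliveries trust the timecode display in Logic rather than hand math like this.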

Naming the audio files you deliver in a clear and consistent manner is also absolutely crucial. Don't give them files with names like "FinalMix_Drums_02" or garbage like that which will only make sense to you. I use a clear naming system that integrates the production title, the season and episode number, the cue number, cue title, version, stem label, and the SMPTE start point into EVERY filename. So my audio files have names like "LV307-4m22-AttackDog-v5-DRUMS=01.07.22.00" - this translates into:

LV = "Las Vegas" (the production name)

307 = season 3 episode 07

4m22 = Act 4, music, cue number 22

AttackDog = the actual "title" of the piece of music

v5 = version (aka revision) number 5 of this piece of music

DRUMS = the stem name. Sometimes I use things like "aDRUMS", "bGTRS", and "cKEYS" so that things alphabetize neatly. I use "zMIX" so that my reference mix (which is often not actually used by the client except for reference purposes) appears at the end of the list. You could also label the mix "aMIX" so that it appears at the TOP of the list if you want.

01.07.22.00 = this is the actual SMPTE start point for the audio files - the exact spot in their timeline where that audio file should be placed in order for everything to line up the way it did when I was working on the music.

With all of that info in each and every file name, there will be no confusion about what stem of what version of what piece of music they're looking at, and no confusion about where in time it should happen.
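A naming scheme this regular is also easy to script, which helps when you're bouncing a dozen stems per cue. A hypothetical helper following the field layout described above (the function name and signature are mine, not part of any Logic feature):

```python
def cue_filename(show, season, episode, act, cue, title, version, stem, smpte_start):
    """Build a delivery filename in the SHOW+SEE-AmCC-Title-vN-STEM=TC style."""
    return (f"{show}{season}{episode:02d}-{act}m{cue}-{title}-"
            f"v{version}-{stem}={smpte_start}")

name = cue_filename("LV", 3, 7, 4, 22, "AttackDog", 5, "DRUMS", "01.07.22.00")
# -> "LV307-4m22-AttackDog-v5-DRUMS=01.07.22.00"
```

Generating the names rather than typing them is one less chance for a stem to go out with a stale version number or start point in it.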

In truth, on many projects I work on we don't use 2-pops anymore since having the SMPTE frame number the audio files need to start at embedded in the file name is usually enough for them - this saves them the hassle of chopping out and muting all of those annoying beeps so they aren't heard in the final mix. On a project with lots of music cues that can save them some time. But when just starting out, or working with new people, I wouldn't skip the 2-pops unless you have a discussion with the client and confirm that just having the start point in the file names is enough for them.

You will have to use whatever method you prefer to ensure that only the elements that belong in each stem are playing as you Bounce that stem - use Mute or Solo on the Aux Objects that are your stem sub-masters, or use Region / Object Solo to have only the desired Regions in the Arrange Window playing as you do the bounce. It is possible to avoid the hassle of creating all of those Aux Objects for stem sub-masters, and the hassle of routing individual tracks to the appropriate Bus>Aux set, if you use Region Solo and do the bounce of each stem with just that stem's Regions playing - but this has two drawbacks: You must be very careful and precise about what objects you select in the Main (Arrange) Window before you invoke Region Solo, to ensure that you haven't accidentally allowed some sounds to occur on more than one stem (this is a big no-no), and if you apply any "mix bus" dynamics processing like compressors / limiters, etc., then the final mix of all your stems, when all combined, will sound a little bit different from what you were hearing when everything was playing all at the same time through that compressor / limiter. Picture how a compressor will react when only the drums are playing, and then picture how it will react when the whole darn mix is hitting it. Depending on your music this might not be a huge problem, but this is the reason why the "right" way to do it is to go to the trouble of creating all those stem sub-masters in Logic's Environment or Mixer windows, and apply any "mix bus" dynamics processing individually on each stem, with no further processing on the actual "final mix".

Some folks (like me) use "Solo each stem and Bounce them one at a time" as described above; some folks like to route each stem to separate audio outputs and print the results to a separate computer running Pro Tools (I also do this on bigger projects), and some like to create a set of audio tracks within their Logic Project, route the stem sub-masters back to those tracks, and then do a real-time audio record to print their cues. Each method has its advantages and disadvantages. It's up to you to decide which method best matches your gear, the complexity of your music / stem layout, and your comfort level for setting up complex audio routings in Logic.

TL;DR = Sure, Logic can do everything you need. I deliver film and television scores mixed inside Logic all the time.

Last edited by charlieclouser; 17th December 2018 at 11:52 PM..
Old 19th December 2018
  #3
Lives for gear
 
gsilbers's Avatar
 

nice ^^^^^
Old 3rd January 2019
  #4
Here for the gear
 
timemaster's Avatar
 

Wow, great info Charlie, and very generous of your time. I would add that Logic's Track Stack / Summing Stack feature makes it two clicks to set up all the Aux/Bus routing necessary for stems...
Old 20th August 2019
  #5
Gear Nut
 

Charlie, I didn't thank you for this and for that I'm sorry. This is GREAT information and I would have never known anything about something like the "2 pop".

I'm hoping to start off kind of small and do more basic "synth" kind of music without the need for "sections" and probably averaging between 4 and not more than maybe 10 tracks (and only that high a count for single sounds that appear sporadically as effects probably).

But I see the necessity of learning how to do what you're talking about should I work my way into longer projects where there will be music at various places throughout the video.

I've been away a while, so sorry again that I didn't thank you earlier. This was a wonderful post, and I hope you didn't feel like you gave me all that info and then I insufferably ignored you or didn't respond - that certainly wasn't my intent. I remember posting this, but then I think I just forgot, or got preoccupied with something else in life.

I'm back now, learning a lot more, and finding this response is just yet more stuff I realize I need to start investigating - and that's great. It's overwhelming at times, and humbling, but at the same time it gives me the pointers I desperately need to start learning more on my own; otherwise I'd just be fumbling about, not knowing what's of use or not.





Quote:
Originally Posted by charlieclouser (post #2, quoted in full above)