Mastering Workflow Question Regarding SRC and Dither
I am mastering a set of 12 songs that were transferred from master tape to 24-bit/48 kHz files. I am using Pro Tools to process the audio (compression, EQ, limiting) and Jam to author the Red Book CD master.
My question concerns the optimal place in the chain to perform the SRC from 48 kHz to 44.1 kHz. My understanding is that SRC should be performed PRIOR to dither. So I have two options: convert all the files to 44.1 kHz before starting work in PT (using either SampleManager on the files before importing, or PT's "Tweakhead" on import), OR work in PT at 48 kHz and perform the SRC at the end of the process, just before truncating/dithering to 16-bit.
From reading the posts on this forum, it seems that of these two options the latter is better (i.e. do all of the processing at the higher sample rate and convert to 44.1 kHz as the last step before dither). Is that correct?
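To make sure I have the order of operations straight, here is a minimal sketch of that chain using NumPy/SciPy as stand-ins for the actual tools (the 147/160 ratio is exact for 48 kHz to 44.1 kHz; the TPDF dither here is just an illustration, not what SampleManager or the PT dither plug-in actually does internally):

```python
import numpy as np
from scipy.signal import resample_poly

def src_then_dither(x_48k: np.ndarray) -> np.ndarray:
    """Resample 48 kHz float audio to 44.1 kHz, THEN dither/quantize to 16-bit."""
    # 44100/48000 reduces to 147/160, so resample by that rational factor first
    x_441 = resample_poly(x_48k, up=147, down=160)
    # TPDF dither: sum of two independent uniform noises, scaled to 1 LSB at 16-bit
    lsb = 1.0 / 32768.0
    tpdf = (np.random.uniform(-0.5, 0.5, x_441.shape)
            + np.random.uniform(-0.5, 0.5, x_441.shape)) * lsb
    # Word-length reduction to 16-bit is the very last step, after the SRC
    return np.clip(np.round((x_441 + tpdf) * 32767), -32768, 32767).astype(np.int16)

# one second of a 1 kHz tone at 48 kHz in, 44.1 kHz 16-bit out
t = np.arange(48000) / 48000.0
out = src_then_dither(0.5 * np.sin(2 * np.pi * 1000 * t))
print(len(out), out.dtype)
```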
If so, there are two ways to do this. Is one better than the other?
Method #1 - Process in PT, BTD each song at 24-bit/48 kHz (i.e. no conversion performed in PT), and use SampleManager to batch-process the SRC and bit-depth conversion with dither of all files to 16-bit/44.1 kHz.
Method #2 - Process in PT, BTD each song at 24-bit/44.1 kHz (SRC using Tweakhead). Reimport the bounced/SRC'd files into a new 24-bit/44.1 kHz session, insert a dither plug-in on the master fader, and perform a second BTD, this time converting the output to 16-bit/44.1 kHz.
One other question: if using Method #1, when performing the BTD out of PT at 24-bit/48 kHz, would 24-bit dither technically be necessary at that step (to account for the truncation from PT's internal 32-bit floating-point processing), or is it overkill?
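My understanding of why dither matters at any word-length reduction is the low-level behavior below. This is a NumPy toy model (assumed, not PT's actual internals): a tone below 1 LSB at 24-bit vanishes under plain rounding but survives, buried in benign noise, when TPDF dither is applied first:

```python
import numpy as np

rng = np.random.default_rng(42)
lsb = 1.0 / (2 ** 23)  # one LSB at 24-bit, assuming full scale = +/-1.0

# test tone at 0.4 LSB peak -- well below the 24-bit quantization step
t = np.arange(48000) / 48000.0
x = 0.4 * lsb * np.sin(2 * np.pi * 1000 * t)

# undithered word-length reduction: the tone is simply rounded away to silence
undithered = np.round(x / lsb) * lsb

# TPDF dither (sum of two +/-0.5 LSB uniform noises) added before rounding:
# the quantized output still carries the tone, masked by noise
tpdf = (rng.uniform(-0.5, 0.5, x.shape) + rng.uniform(-0.5, 0.5, x.shape)) * lsb
dithered = np.round((x + tpdf) / lsb) * lsb

print("undithered output is silent:", bool(np.all(undithered == 0)))
print("dithered output still correlates with the tone:",
      bool(np.corrcoef(x, dithered)[0, 1] > 0.2))
```

If that model is right, even the 32-float-to-24-fixed truncation is a word-length reduction, which is why I'm wondering whether 24-bit dither belongs there too.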
Any thoughts or opinions would be appreciated!