Originally Posted by jazzymike
So all things being equal, will the highest clocked CPU deliver the lowest latency?
The most important variable is the workload.
A 96 kHz workload is different from a 44.1 kHz workload.
A workload of 100 simultaneous tracks is different from a workload of 10 tracks.
The computation of adjusting gain by 1 dB is different from the computation of a convolution reverb.
Basically, the workload is the total amount of data and what processing you're doing to that data.
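To make that concrete, here's a rough sketch (my numbers, not from the post) of how the raw data side of the workload scales with sample rate and track count:

```python
# Illustrative only: samples the CPU must process each second grows
# multiplicatively with sample rate and track count.
for rate in (44_100, 96_000):        # samples per second, per track
    for tracks in (10, 100):
        samples_per_sec = rate * tracks
        print(f"{rate} Hz x {tracks:>3} tracks = {samples_per_sec:>9,} samples/sec")
```

And that's before you multiply by the per-sample cost of whatever processing you run on each track.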
If the total workload is "light" enough, every CPU speed you mentioned, desktop and mobile alike, can handle it and deliver the same latency. The lowest DAW buffer you can set is 32 samples. That minimum limit negates any speed advantage of faster processors, provided the slower processors can finish all their computations within that 32-sample window. A future 10.0 THz supercomputer with liquid-nitrogen cooling towers could process the raw data faster, BUT that speed is rendered irrelevant by the 32-sample buffer.
If a CPU is clocked at 2.0 GHz, that means it cycles its state 2 billion times per second. To simplify the discussion (leaving out subtleties of CPU architectures and memory fetches), let's say it executes 1 instruction per cycle. When you set the buffer to 32 samples, another way of thinking about it is that you're telling the DAW to limit its ongoing processing window to 32/44100ths of a second (about 0.7 ms), which is enough time to complete roughly 1.4 million CPU instructions (2,000,000,000 * 32 / 44100 == 1,451,247).
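The arithmetic above can be checked in a few lines (using the same simplifying assumption of ~1 instruction per cycle):

```python
# Back-of-the-envelope numbers from the post: a 2.0 GHz CPU and a
# 32-sample buffer at 44.1 kHz.
clock_hz = 2_000_000_000    # 2.0 GHz, assume ~1 instruction per cycle
buffer_samples = 32
sample_rate = 44_100

window_sec = buffer_samples / sample_rate          # ~0.00073 s (~0.7 ms)
instruction_budget = int(clock_hz * window_sec)    # ~1.45 million instructions

print(f"window: {window_sec * 1000:.2f} ms")       # window: 0.73 ms
print(f"budget: {instruction_budget:,} instructions")
```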
Is a 0.7 millisecond window of 1.4 million instructions enough to calculate 1 dB of gain added to 2 tracks? Yes. Is it enough CPU breathing room to add convolution reverb to 100 tracks? No. You have to either raise the DAW buffer (give it a bigger window of instructions -- say 2.8 million CPU instructions, or more) ...or... get a faster computer that can complete more instructions in the same 0.7 ms of time.
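Those two escape routes can be put side by side with the same toy model (again, assuming ~1 instruction per cycle; real CPUs vary):

```python
def budget(clock_hz, buffer_samples, sample_rate=44_100):
    """Instructions available per buffer window (toy model: 1 per cycle)."""
    return int(clock_hz * buffer_samples / sample_rate)

# Option 1: double the buffer on the same 2.0 GHz CPU.
# Roughly 2.9 million instructions, but latency doubles to ~1.45 ms.
print(budget(2_000_000_000, 64))

# Option 2: keep the 32-sample buffer but double the clock.
# Same ~2.9 million instructions, latency stays at ~0.73 ms.
print(budget(4_000_000_000, 32))
```

Both options buy the same instruction budget; only the faster CPU buys it without adding latency, which is the whole point of the trade-off.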