Originally Posted by Bill@WelcomeHome
This is impossible in any production scenario involving computers. Nothing moves without the clock ticking. The OS alone adds a delay, as does every branch of processing through which we send a signal. Even if a signal only comes in and goes back out, it passes at least through the sound card, through the OS to the app, through the OS again, and through the sound card again. Add EQ or any other processing and you've added further latency, since you've made the trip through the app longer.
The A/D adds latency, as does the D/A.
As has been mentioned elsewhere, sound travels at roughly 1100 feet per second. By the OP's hypothesis, there is significant delay between a player and his amp 20 feet away, or between two players standing 10 feet apart. I don't find that delay problematic, though I am at differing distances from the various people, instruments, and amps with whom I might be performing. Somehow we all make coherent music together. To me, 5 ms is not significant. However, a direct monitoring signal followed by a delayed matching signal from the DAW 15-500 ms later would be darned distracting.
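To put numbers on the acoustic side of this, here's a quick sketch of the delay sound incurs over distance, using the ~1100 ft/s figure from the post (the exact speed varies with temperature; the distances chosen are just illustrations):

```python
# Speed of sound in air, per the figure quoted in the thread (~1100 ft/s).
SPEED_OF_SOUND_FT_PER_S = 1100.0

def acoustic_delay_ms(distance_ft: float) -> float:
    """Time for sound to travel distance_ft through air, in milliseconds."""
    return distance_ft / SPEED_OF_SOUND_FT_PER_S * 1000.0

# A few stage-sized distances: roughly 1 ms per foot.
for d in (5.5, 10.0, 20.0):
    print(f"{d:5.1f} ft -> {acoustic_delay_ms(d):5.1f} ms")
```

So a player 20 feet from his amp already hears roughly 18 ms of acoustic delay, which is the comparison being made against converter and buffer latencies.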
Lower latencies in production situations result in less capability, as the computer pushes harder to handle realtime processing. Increase the latency and you'll be able to do more.
Your first paragraph only makes my point. There is latency coming from a lot of places when dealing with computers, e.g. ASIO I/O, converters, distance from monitors, delay compensation, etc. So in a cumulative-latency environment, the logical thing to do is to eliminate as much latency as you can, so you can do more time-critical things while mixing, etc.
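The cumulative-latency point can be sketched by simply summing the stages in a monitoring path. The figures below are illustrative assumptions, not measurements of any particular interface:

```python
# Hypothetical per-stage latencies in ms (illustrative values only).
stages = {
    "A/D conversion":     0.8,
    "ASIO input buffer":  2.9,   # e.g. 128 samples @ 44.1 kHz
    "plugin processing":  1.5,
    "ASIO output buffer": 2.9,
    "D/A conversion":     0.8,
    "10 ft to monitors":  9.1,   # acoustic path at ~1100 ft/s
}

total_ms = sum(stages.values())
for name, ms in stages.items():
    print(f"{name:18s} {ms:5.1f} ms")
print(f"{'total round trip':18s} {total_ms:5.1f} ms")
```

Each stage looks small on its own; it's the sum that matters, which is why shaving latency anywhere in the chain buys headroom for the time-critical work.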
As for your last paragraph, you state that raising your buffer size/latency allows you to do more. This is true for non-realtime work, but for people who need to track through effects, monitor over headphones, play instruments, etc., raising buffers is not what's wanted. And for chaining effects, or using single effects with large latencies, the higher the buffer, the less you can do these things in REALtime. So the more you raise your buffer, the more non-realtime things you can do; but if you want to do both non-realtime and realtime tasks together (together is more), then you'll need to figure out how to keep your buffer small while still doing all that you need. There is a lot more to music production than just non-time-critical processes such as most mixing tasks, mastering, etc. Many of us simply like to make music and do all the stages together, often.
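The buffer-size trade-off both sides are describing is just arithmetic: one buffer of audio takes buffer_samples / sample_rate seconds to fill. A minimal sketch, assuming a 44.1 kHz sample rate and common power-of-two buffer sizes:

```python
def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int = 44100) -> float:
    """One-way latency contributed by a single audio buffer, in milliseconds."""
    return buffer_samples / sample_rate_hz * 1000.0

# Typical ASIO buffer sizes: small buffers feel instant but stress the CPU;
# large buffers free up CPU for plugins but delay monitoring.
for buf in (64, 128, 256, 512, 1024):
    print(f"{buf:5d} samples -> {buffer_latency_ms(buf):5.1f} ms one-way")
```

Note this is only the buffer's contribution; converters and any plugin latency add on top, and most drivers buffer on both input and output, roughly doubling the figure for a full round trip.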
You state that raising the buffer increases the power. This is true due to a fundamental flaw in the native world, and in some people's eyes, like my own, a higher buffer means less powerful/capable. There is no buffer raising in realtime DSP environments, and when they are combined with native processing, I do all that I want without latency issues, compromises, etc.
I don't know why I have to constantly explain something that is so simple to understand ("the less latency you have, the more you're able to accomplish"), unless you're just negatively biased and looking to bash and destroy an argument because it's a threat to your own beliefs. If arguing is what you want, I'm not interested. I've made my points and will leave it to the readers to use common sense and decide for themselves what's true. I'll leave it to the moderators as well. I'm not going to go in circular arguments, nor do I have the time, so I'm finished here.