 

Quote:
Originally Posted by Downtown
Dan,

I see you're viewing the thread here....any comments? Clarification? Your text is getting some heavy reading through this thread.
I see a lot of comments here.

First, it is true that the front end of a modern AD (called the modulator) runs at very high rates (64-1024 times faster than 44.1kHz), but it does so with very few bits (typically 1-5 bits). That very fast but few-bit data is then converted to the lower rate at many bits (by a circuit called the decimator), which is the converter's output data rate, thus the sample rate. So one should not confuse the localized sample rates inside a circuit with the rate of the data used for storage, transmission or DAW processing… the sample rate.
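
To make the modulator/decimator split a bit more concrete, here is a toy numerical sketch (my own illustration, not any particular converter's circuit): a 1-bit stream at 64 times the output rate is averaged down to multi-bit samples at 44.1kHz. A real decimator uses proper digital filters rather than this crude block average, but the rate/bit-depth exchange is the point.

```python
import numpy as np

fs_out = 44100          # output (storage/DAW) sample rate
osr = 64                # oversampling ratio of the modulator front end
fs_mod = fs_out * osr   # rate inside the modulator: 2,822,400 samples/sec

# Pretend 1-bit modulator output: sign of a 1 kHz tone plus dither
t = np.arange(fs_mod) / fs_mod                            # one second of samples
one_bit = np.sign(np.sin(2 * np.pi * 1000 * t)
                  + np.random.uniform(-1, 1, t.size))     # crude 1-bit stream

# Crude "decimator": average each block of 64 one-bit samples into one
# multi-bit sample at 44.1 kHz (real decimators use proper filtering).
multi_bit = one_bit.reshape(-1, osr).mean(axis=1)

print(fs_mod, "Hz at 1 bit ->", fs_out, "Hz,", multi_bit.size, "multi-bit samples")
```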

Regarding the impact of sample rate on accuracy:

We all know that sampling way too slowly (say at 1kHz) is not a good idea. Why? It will only allow frequencies of 500Hz or lower (at best). Sampling that low for audio is simply ridiculous.
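
A quick numerical illustration of that 500Hz ceiling (my own toy example): a 600Hz tone sampled at 1kHz produces exactly the same sample values as a 400Hz tone, so anything above half the sample rate is simply lost (aliased).

```python
import numpy as np

fs = 1000                                  # 1 kHz sample rate -> 500 Hz limit
n = np.arange(20)                          # 20 sample instants
s600 = np.cos(2 * np.pi * 600 * n / fs)    # 600 Hz tone, above the limit
s400 = np.cos(2 * np.pi * 400 * n / fs)    # 400 Hz tone, below the limit

# The two sets of samples are identical: 600 Hz aliases down to 400 Hz.
print(np.allclose(s600, s400))             # True
```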

But we also know that sampling audio at say 1GHz is ridiculous. Why? Because at 1GHz we can only get very few bits of accuracy, so the noise and distortion will be really bad. The data size will be huge, and for no good reason at all (we are talking about audio, not a 1GHz oscilloscope or telecom gear...).
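
To put a rough number on "very few bits means bad noise and distortion", the textbook figure for an ideal quantizer is about 6.02dB of signal-to-noise per bit (plus 1.76dB). These are formula values, not measurements of any real converter:

```python
# Ideal-quantizer SNR for a full-scale sine: SNR = 6.02 * bits + 1.76 dB
for bits in (2, 4, 16, 24):
    print(f"{bits:2d} bits -> ~{6.02 * bits + 1.76:5.1f} dB SNR")
```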

Therefore, there is a concept we need to adopt – the concept of an OPTIMAL sample rate. Clearly it is above 500Hz and below 1GHz. So what is the optimum rate?

The optimum rate for conversion is driven by the application. For weighing scales, the process is slow (a second or more) but we need a lot of accuracy. For video, we need a few MHz of bandwidth, but luckily for us, the eye is a lot less accurate than the ear, so having fewer bits (than audio) is OK. For medical applications, it depends on the specific case (CAT scan, MRI and so on all have their own requirements)…

Generally, if you look at converters, you find that faster rates yield less accuracy and slower rates yield higher accuracy. Why is that so? Well, this is not the only case where doing things faster is traded off against accuracy; taking your time often enables more precision.

Say you want to charge a cap. If you “take your time”, the final charge will be closer to the intended charging voltage. If you do not wait long enough, it will not charge fully. So you try to reduce the cap size, and now other problems show up (beyond the scope of this post)…
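
A back-of-the-envelope version of the capacitor example (generic RC numbers I picked for illustration): the remaining charge error shrinks as e^(-t/RC), so cutting the time allowed for charging directly costs accuracy.

```python
import math

rc = 1e-6                                # illustrative RC time constant: 1 us
for t in (1e-6, 5e-6, 10e-6, 20e-6):     # time allowed for the cap to charge
    error = math.exp(-t / rc)            # fraction of the target voltage still missing
    print(f"{t * 1e6:4.0f} us of settling -> error of {error:.1e} of full scale")
```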

Say you want an OPamp to track an input signal accurately. A circuit designer knows that the best accuracy is at low frequencies, down to DC, and at some point, going to higher frequencies, we lose accuracy. The devices (transistors) inside the amplifier “lose steam” at higher frequencies. Yes, you can find really fast transistors, but then you trade off accuracy in other ways.
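
A simplified way to see the OPamp point (an idealized single-pole model with illustrative numbers, not any specific part): with a fixed gain-bandwidth product, the loop gain left over at 20kHz is much smaller than at 100Hz, so the tracking error grows with frequency.

```python
# Idealized single-pole op amp: loop gain ~ GBW / (gain * f), and the
# closed-loop tracking error is roughly 1 / (1 + loop gain).
gbw = 10e6               # 10 MHz gain-bandwidth product (illustrative)
closed_loop_gain = 10

for f in (100, 1_000, 20_000):
    loop_gain = gbw / (closed_loop_gain * f)
    error = 1 / (1 + loop_gain)
    print(f"{f:6d} Hz -> gain error of about {error * 100:.3f} %")
```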

The charging cap and the OPamp are just the first two “elements” in the AD circuit. There are a lot of caps charging and a lot of OPamps inside an AD… It would take too long to get into the details of why speed versus accuracy is always a tradeoff. Such tradeoffs do exist in electronics and other areas. In electronics one can trade off speed vs. power, size vs. temperature (heat) and so on. It is true that technology is moving forward, so the tradeoffs today are different from those of, say, 10 years ago. Please take my comments about tradeoffs as correct for a given time in the history of technology. Let's not compare the speed of today's gear with that of 40 years ago… Let's talk about the tradeoffs that exist today, or 10 years ago, or at any specific point in time.

For a while, I was interested in writing a whole paper about the technical reasons for the speed/accuracy tradeoff. But then I needed to buy a new Audio Precision test system for production final testing. Now, these guys at Audio Precision are makers of the finest audio measurement gear, and we are not talking about inexpensive gear. They have a converter-based system called the ATS2, and you can buy it to accommodate up to a little over 100kHz (for 44-96kHz sampling) at very nice accuracy, or you can buy it with an option to extend to around 200kHz (for 192kHz gear). The point is: if you want to do 192kHz, the measurement system is limited to 16 bits! When the best test gear maker has to cut precision significantly to go from 96 to 192kHz, you know there is a speed/accuracy tradeoff. Again, the gear I am talking about is far from inexpensive. At that level, they do not trade off converter quality for lower price. So when I saw that “proof” that speed costs accuracy, I sort of lost interest in writing a more detailed description of why speed and accuracy pull in different directions. I can now say
"I rest my case"...

We are talking about audio, so we need to cover the hearing range. What is it? It is a little higher than 20kHz (for some people), so the theory suggests that we can sample at a little higher than 40kHz (twice the bandwidth). But theory assumes that we can “do theoretical things”; in this case, we would need brick-wall decimation filters, going from passing audio fully intact at say 20kHz to blocking it completely at say 20.000001kHz. We cannot do such things. We end up needing some margin to “bridge the gap” between theory and good practice.
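
One way to see what a near-brick-wall filter costs (using the common Kaiser-window length estimate, with 100dB of stopband attenuation as an illustrative target): at 44.1kHz the filter must fall from the passband edge at 20kHz to the stopband by 22.05kHz, while at 96kHz there is far more room, so it needs far fewer taps.

```python
import math

# Rough FIR length estimate (Kaiser): taps ~ (A - 8) / (2.285 * 2*pi*df/fs),
# where A is stopband attenuation in dB and df is the transition width in Hz.
def fir_taps(fs, f_pass, f_stop, atten_db=100):
    df = f_stop - f_pass
    return math.ceil((atten_db - 8) / (2.285 * 2 * math.pi * df / fs))

print("44.1 kHz:", fir_taps(44100, 20000, 22050), "taps")  # very tight transition
print("96 kHz:  ", fir_taps(96000, 20000, 48000), "taps")  # relaxed transition
```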

In my view, that margin should be up to the design engineers. The ear should tell us what we can hear and where the limits are, and the designer gets to find how much margin is sufficient to accommodate the ear. Too often, mastering and recording people, or even gear salesmen, step over their boundaries into the design area, a sad fact responsible for the false notion that 192kHz sampling will be better.

Nowadays there are a number of good designers and ear people who find a 60-70kHz sample rate to be the optimal rate for the ear. It is fast enough to include what we can hear, yet slow enough to do it pretty accurately. Faster rates mean less accuracy, with some unwanted side effects – increased data size and the need for a more powerful DSP compute engine – and there is no upside to going faster. Going slower gets the designer “squeezed” at the 20kHz range, which we need to include for high quality audio.
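
The "increased data size" part is simple arithmetic (assuming uncompressed 24-bit stereo): doubling the sample rate doubles the storage and the DSP load.

```python
# Uncompressed data rate for 24-bit stereo at a few sample rates
bits, channels = 24, 2
for fs in (44_100, 96_000, 192_000):
    mb_per_min = fs * bits * channels * 60 / 8 / 1e6
    print(f"{fs / 1000:5.1f} kHz -> {mb_per_min:5.1f} MB per minute")
```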

It took a few years to turn things around, but many of those who “jumped on the 192kHz bandwagon baloney” are coming around to saying that 60-70kHz is optimal. Well, there is no such standard, but 88.2-96kHz is not that far from the optimum. It is slightly faster than I would like, but still acceptable to me.

It is possible that 1-2kHz will be more accurate when using a 44.1kHz sample rate. In fact, one can do a great job for 1-2kHz with a 5kHz sampling rate. But the converter designer cannot “go there”. We usually need to look at the whole audio range. 44.1kHz can be a somewhat tight squeeze, especially when we keep “piling” attenuation onto that 20kHz range – most mics have 3dB of loss at around 20kHz, then there is the AD with 3dB at 20kHz, then the speaker, the processor… Pretty soon the accumulated impact is such that there is not much 20kHz left… Moving the sampling a little higher (be it 60kHz, 88.2 or 96kHz) takes some of the “offenders” out of the picture (all you need is a few kHz extra and the problem is gone). At my age (62) it makes less of a difference, but a young person with great ears can be impacted.
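
The "piling on" of attenuation is easy to put a number on (using the illustrative 3dB-per-stage figures from the paragraph above): three stages each down 3dB at 20kHz leave only about a third of the original amplitude there.

```python
# Cascaded losses at 20 kHz: dB values add, amplitudes multiply
losses_db = [-3, -3, -3]             # mic, AD filter, speaker (illustrative)
total_db = sum(losses_db)
amplitude = 10 ** (total_db / 20)    # convert dB back to an amplitude ratio
print(f"total {total_db} dB -> {amplitude:.2f} of the original 20 kHz amplitude")
```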

Regards
Dan Lavry
Lavry Engineering