An A/D converter samples an analog (voltage) signal and outputs a digital signal, say 24-bit unsigned. The range of 2^24 values accommodates the analog signal's amplitude range. The (known) quantization errors that occur during this conversion are propagated to any later digital processing of that signal.
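To make that quantization step concrete, here's a minimal sketch in Python. The 24-bit depth is from above; the [-1.0, 1.0) signal range and the truncating quantizer are my assumptions for illustration:

```python
import math

BITS = 24
LEVELS = 1 << BITS               # 2^24 possible output codes

def quantize(x):
    """Map an analog sample x in [-1.0, 1.0) to a 24-bit unsigned code."""
    code = int((x + 1.0) / 2.0 * LEVELS)   # truncating quantizer
    return min(max(code, 0), LEVELS - 1)

def dequantize(code):
    """Map a 24-bit code back into [-1.0, 1.0)."""
    return code / LEVELS * 2.0 - 1.0

# The round trip loses at most one quantization step (2 / 2^24):
worst = max(abs(dequantize(quantize(math.sin(t / 100.0))) - math.sin(t / 100.0))
            for t in range(1000))
print(worst < 2.0 / LEVELS)      # True: the error stays below one LSB
```

That bounded, known error is the "known quantization error" the conversion bakes in before any processing happens.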

What happens when you convert that 24-bit *signal* to a 32- or 64-bit floating point *number* in a DAW?

One would think that floating point is as precise a holder of data as 24-bit unsigned, or better, because its mantissa alone has 24 bits (23 stored plus one implicit), thus yielding the same precision as 24-bit plus the extra "headroom" of the 8-bit exponent. But in reality the values floating point can represent are not evenly spaced: as a number gets larger, the gap between neighbouring representable values grows, so there is less absolute precision available to represent it. The problem *mostly* affects operations like adding two numbers where one is much larger than the other, because the low-order bits of the smaller number get rounded away. The effects, though, are devastating, and that's one of the reasons many professionals prefer the OTB sound signature. Check out this link:

http://www.cs.princeton.edu/introcs/91float/
and this link:

http://docs.python.org/tut/node16.html
and this link for more enlightenment:

http://en.wikipedia.org/wiki/Fraction_%28mathematics%29
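The growing gap between neighbouring floats, and the large-plus-small problem, are easy to see with Python's 64-bit doubles. `math.ulp` reports the spacing at a given magnitude; the specific magnitudes below are just illustrative:

```python
import math

# The spacing between adjacent doubles grows with magnitude
# (constant relative precision, growing absolute gaps):
print(math.ulp(1.0))         # ~2.2e-16
print(math.ulp(2.0 ** 24))   # ~3.7e-9, a much wider gap

# Adding a small number to a huge one can lose the small one entirely:
big, small = 2.0 ** 53, 1.0
print(big + small == big)    # True: 1.0 is below the spacing at 2^53
```

The same effect happens sooner in 32-bit floats, whose 24-bit mantissa runs out of room at much smaller magnitudes.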
The combination of inaccuracy and imprecision in floating-point DSP computations is what leads to what I would call "digital harmonics". These digital harmonics result in a sound that is unpleasing to the ear, comparable in kind and order of magnitude to the artifacts introduced when a sound wave is reproduced as an electrical signal.
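One way such errors build up: every addition in a long sum (think of a mix bus accumulating many channels) may round, and the rounding errors compound. A small sketch, where the sample count and values are arbitrary and `math.fsum` serves as an error-free reference:

```python
import math
import random

random.seed(0)
samples = [random.uniform(-1.0, 1.0) for _ in range(100_000)]

naive = 0.0
for s in samples:
    naive += s               # every += may round to the nearest double

exact = math.fsum(samples)   # fsum tracks the rounding error exactly
drift = abs(naive - exact)
print(drift < 1e-9)          # True: small here, but it compounds as the
                             # number of operations on the signal grows
```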

It would be interesting to see what the pitfalls of fixed-point systems (aka PT) are. Meanwhile I'll take the analog route and try to figure out a way to make those fractions work in the DAW domain. Hey, word lengths are limited as is; we don't have to mess up the arithmetic too.
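For comparison, here's where fixed point loses bits instead: multiplying two Q-format numbers doubles the fractional bits, and shifting back down truncates the low ones. This is a hypothetical sketch of the idea only; the 24-bit Q format is my assumption, not any particular product's internal format:

```python
FRAC_BITS = 24
ONE = 1 << FRAC_BITS                 # fixed-point representation of 1.0

def to_fixed(x):
    """Convert a float to Q-format with 24 fractional bits."""
    return int(round(x * ONE))

def fixed_mul(a, b):
    # The raw product carries 48 fractional bits; the shift
    # back to 24 truncates the low 24 -- fixed point's loss.
    return (a * b) >> FRAC_BITS

result = fixed_mul(to_fixed(0.5), to_fixed(0.25)) / ONE
print(result)                        # 0.125, exact because these values fit
```

The trade-off: fixed point keeps the gaps between representable values uniform across the whole range, at the cost of far less headroom than a floating-point exponent provides.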