Thread: Audio @ 1 bit 5.6 MHz on Blue Ray could save the day... (Recorders, Players & Tape Machines)
Old 20th March 2011, #121
bcgood (Lives for gear, Verified Member)
Quote:
Originally Posted by Laarsø View Post
But the sound of analog tape is for me where it's at. Real drum Dub at 30 ips AES, on 1/2" 2-track with no NR and no penthouse using 456 or ATR, depending.




Cheersø,
Laarsø
Yea I agree, analog tape is where it's at, most def.
Old 20th March 2011, #122
wado1942 (Lives for gear)
I'm a GP9 guy myself. I'll probably have to switch to ATR when I use up the last of my GP9 though.
Old 23rd March 2011, #123
cdog (Lives for gear)
Problems people in this forum should be concerned with today

1. Loudness Wars
2. Poor-quality MP3s; people listening to music on YouTube
3. Poor quality listening devices (earbuds, computer speakers)
4. Piracy

Problems people in this forum should not be concerned about

1. A new audio format, or super high bitrate/samplerate formats.

The 16/44.1 standard isn't the problem. It's a fine standard.
Old 23rd March 2011, #124
wado1942 (Lives for gear)
Quote:
Originally Posted by editronmaximon View Post
Hmm, maybe in some sort of infinite bit depth fixed point format. Definitely not in floating point. Adding signals together can have some real problems in float, aiui.
You don't need infinite resolution for summing to be transparent. Summing audio signals digitally is plain arithmetic and nothing more. 3 + 16 + 347 + 5 = 371. That's digital audio summing, no errors there. Where you run into trouble is making level adjustments so that everything can be summed without clipping. I've done some experiments on this.

Here's a recorded sine wave, no processing.

[image removed]

Here's the same wave after being gain changed by .2dB on cheap 32-bit float software.

[image removed]

The same wave is shown again, having its gain changed by .2dB in software that cost 10x as much, still 32-bit float.

[image removed]

Once again, the same process with well-designed software costing about halfway between the first two, also 32-bit float. Note that this one was flat-dithered internally while the others had no dither option.

[image removed]
I can see a potential issue from summing many, many tracks in 32-bit float, so I'll have to do an experiment on that as well. Still, if the software is well designed, the error from mixing sixteen tracks together should be pretty low compared to the kind of distortion you might get from digital compression, where the gain is constantly changing (not to mention artificial harmonics adding aliasing). Then there's EQ, where hundreds of versions of the same signal can be level adjusted and summed together.

You've sparked some curiosity in me now, so I'll try an experiment with summing in the floating point world.
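The arithmetic claim in the post above can be sketched in a few lines of Python (the sample values are the poster's own toy numbers; the -0.2 dB gain is the change shown in the screenshots):

```python
# Sketch: summing integer samples is exact arithmetic, but an arbitrary
# gain change forces the scaled values back onto the sample grid, and
# that rounding is where the error comes from.
samples = [3, 16, 347, 5]

# Plain digital summing: ordinary integer addition, no error at all.
mix = sum(samples)
assert mix == 371

# A -0.2 dB gain is an irrational-looking multiplier, so the results no
# longer land on integers and must be re-quantized.
gain = 10 ** (-0.2 / 20)                     # ~0.9772
scaled = [round(s * gain) for s in samples]  # back onto the integer grid
errors = [s * gain - q for s, q in zip(samples, scaled)]

print(scaled)   # quantized samples after the gain change
print(errors)   # per-sample rounding error, bounded by +/-0.5 LSB
```

Dither, as mentioned for the third screenshot, would replace this deterministic rounding error with shaped noise.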
Old 23rd March 2011, #125
wado1942 (Lives for gear)
Quote:
Originally Posted by editronmaximon View Post
Hmm.

But those graphs are of simple gain changes, and even there one sees issues and observable inconsistency from software to software.

I thought you were talking about summing signals together.

I think one of the main problems they always talk about with floating point is with combining signals, particularly where one is high amplitude and the other is low amplitude. The precision suffers significantly.

It's established that the 32-bit float implemented in desktop computers gives up precision in favor of dynamic range.

It's just weird, from what I can tell. Like certain numbers can't exist, and other weird stuff. There's a bunch of stuff written about it. I'll try to locate and post some later, if you're interested.

From what I understand, actual 64-bit floating point is way better, but I guess "they" are running into performance problems getting it going on personal computers, or something. The 32-bit is easy and fast, apparently. But accurate... ehm, not so much.

Right, I AM talking about summing. Simple addition is lossless, period. What I'm saying is the only place where distortion can occur in a simple summing operation is the adjustment of levels that takes place BEFORE summing.

Now, the advantage of floating point IS increased resolution over fixed point. 32-bit fixed will have higher resolution at high amplitude than 32-bit float, but in the floating point world a -144dB signal has about the same resolution as a full scale signal. That doesn't sound important until you start performing operations that require lots of multi-level signals coming together, like in an EQ. You can also have overshoots in floating point math without distortion, which is also important for processing, which can have wildly higher peaks internally than what you'd get upon the output. I remember one of the guys involved in the first manufactured digital reverb saying that they added an extra four bits of HEADROOM, not foot room, for the processing to avoid internal overshoots.

Anyway, your comment about simple summing in the floating point world causing distortion comes as a shock to me, so like I said, I'll have to look into that in greater detail.
Old 23rd March 2011, #126 (Lives for gear, Verified Member)
Quote:
Originally Posted by editronmaximon View Post
I think one of the main problems they always talk about with floating point is with combining signals, particularly where one is high amplitude and the other is low amplitude. The precision suffers significantly.
It rather depends on what you consider significant.

When you add two floating point numbers, the smaller one is first shifted to give it the same exponent as the larger one, which means you lose any bits off the bottom (the number gets rounded).

But you still have a 24 bit mantissa, so the worst case scenario is that what comes out is the equivalent in accuracy terms of adding two 24 bit fixed point values together.
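The alignment behaviour described above is easy to check directly with IEEE single precision (numpy's float32 has a 24-bit mantissa counting the implicit leading bit; the 2**-25 offset is just a chosen illustration):

```python
import numpy as np

# When two floats are added, the smaller addend is shifted to match the
# larger one's exponent; bits that fall off the bottom are lost.
big   = np.float32(1.0)
small = np.float32(2.0 ** -25)   # more than 24 bits below 'big'

# The small value is entirely rounded away: the sum is exactly 1.0.
assert big + small == np.float32(1.0)

# Keep the addend within 24 bits of the larger value and it survives.
small2 = np.float32(2.0 ** -23)  # exactly one ulp of 1.0
assert big + small2 > np.float32(1.0)
```

This is the "worst case equals a 24-bit fixed-point add" point: the result is always correct to the precision of the 24-bit mantissa, it just can't carry information more than 24 bits below the larger operand.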
Old 23rd March 2011, #127 (Lives for gear)
Edit: retracted while I refresh on my sig figs.
Old 24th March 2011, #128 (Lives for gear)
Quote:
Originally Posted by wado1942 View Post
You don't need infinite resolution for summing to be transparent. Summing audio signals digitally is plain arithmetic and nothing more. 3 + 16 + 347 + 5 = 371. That's digital audio summing, no errors there. Where you run into trouble is making level adjustments so that everything can be summed without clipping. I've done some experiments on this.

Here's a recorded sine wave, no processing.

[image removed]

Here's the same wave after being gain changed by .2dB on cheap 32-bit float software.

[image removed]
If you look at the dB scale on the right, you notice that these artefacts are at about the -140 dB level. For many reasons, nobody could ever hear them in a musical signal: there is no reproduction system with 160 dB of dynamic range, and there is no human being who could listen to a 150 dB SPL signal and hear something near the threshold of hearing at the same time.

All this is academic. There surely are lots of real things to worry about, but this is not one of them.
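The "-140 dB" reading above lines up with what single precision predicts: a single float32 rounding has a relative error of at most 2**-24 (one half-ulp of the 24-bit mantissa), which expressed in dB relative to the signal is:

```python
import math

# Worst-case relative rounding error of one float32 operation, in dB.
half_ulp_db = 20 * math.log10(2.0 ** -24)
print(round(half_ulp_db, 1))   # -144.5
```

So artefacts plotted around -140 dB are roughly what an undithered single-precision gain change should produce.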
Old 24th March 2011, #129 (Lives for gear, Verified Member)
Quote:
Originally Posted by editronmaximon View Post
That's truncation, not rounding.
That all depends on how your hardware implements it. A simple shift is a truncation, but floating point hardware typically uses a wider adder and rounds the result off after addition.
Quote:
That's incorrect and misleading. You do not have to drop any bits from the addends before summing them in fixed point. As shown in what I cited above, simply adding or subtracting two numbers in floating point can result in significant inaccuracy, and even some rather bizarre results.
No it is not incorrect, you're just inserting statements I didn't make.
I didn't say you had to shift fixed point numbers before adding them, I said that the worst case scenario with adding two floating point numbers was a result with the precision of adding two fixed point numbers with the same mantissa.
You don't need to shift the fixed point numbers and lose precision, they've already lost the precision, a 24 bit fixed point representation is less accurate than a 32 bit floating point one.
Quote:
I think you are trying to defend 32-bit floating point because your job entails writing software programs for 32-bit floating point systems.
I generally use floats when they're available (not always though, there can still be speed advantages with integers), and fixed point when working with fixed point DSPs.

I can work in both; technically speaking it's not a problem. You see, I'm actually good at what I do and know my subject.

What I'm actually trying to do is educate people, including you. I don't have to; it doesn't gain me anything financially and eats into time that does, so your logic is as faulty as your knowledge.
Quote:
I don't have a dog in the race. I have, and use, both types of software. I am just looking at it objectively.
And without any real understanding of the subject.
Old 24th March 2011, #130
wado1942 (Lives for gear)
OK, there's no validity to anything I said.
Old 24th March 2011, #131 (Gear Addict)
Quote:
Originally Posted by Jon Hodgson View Post
That all depends on how your hardware implements it. A simple shift is a truncation, but floating point hardware typically uses a wider adder and rounds the result off after addition
That's not what you said in your earlier incorrect statement.

Quote:
Originally Posted by jonhodgson
No it is not incorrect, you're just inserting statements I didn't make.
I didn't say you had to shift fixed point numbers before adding them, I said that the worst case scenario with adding two floating point numbers was a result with the precision of adding two fixed point numbers with the same mantissa.
You don't need to shift the fixed point numbers and lose precision, they've already lost the precision, a 24 bit fixed point representation is less accurate than a 32 bit floating point one.
That's not true, unless you are perhaps limiting your analysis to certain cases where it may be.

In particular, there are numbers that can be represented in fixed point that cannot be represented at all in floating point.

Quote:
Originally Posted by jonhodgson
I generally use floats when they're available, and fixed point when working with fixed point DSPs.

I can work in both, technically speaking it's not a problem, you see I'm actually good at what I do and know my subject.
As it turns out, writing dsp code does not require a complete understanding. You have tools to work with where others have done the deep work.

Quote:
Originally Posted by jonhodgson
What I'm actually trying to do is educate people, including you, I don't have to, it doesn't gain me anything financially and eats into my time that does, so your logic is as faulty as your knowledge.
That's pretty funny.

Quote:
Originally Posted by jonhodgson
And without any real understanding of the subject.
I have enough understanding to know that you don't.

One clue for others may be the fact that your homebrew theories are often in complete opposition to what respected authorities say. Too much spin and agenda from you.

I've been at this for a long time, and I know how to discern reliable information.
Old 24th March 2011, #132 (Gear Addict)
Quote:
Originally Posted by wado1942 View Post
OK, there's no validity to anything I said.
O.K., if you say so.

But that's not what I wrote.

I'll post some more info. later maybe.


Basically, the explosion of 32-bit floating point DAWs and such is a result of companies' desire to be able to sell a cheaper product than the pro stuff. They saw that people were loving the whole bits-and-bytes thing with non-linear editing capabilities, like Sonic Solutions and Pro Tools TDM, and they wanted to tap into the much larger low-end market, to make more money.

It has given a lot of people tools to work with, and that's cool, but it has also spawned a lot of really lousy engineering practices. The floating point dynamic range makes it kind of idiot/user proof as long as the signal is in the DAW. They sold that as "better".
Old 24th March 2011, #133 (Lives for gear, Verified Member)
Quote:
Originally Posted by editronmaximon View Post
The purported "advantages" of 32-bit floating point are essentially nonexistent, and irrelevant when it comes to audio. It is used because it is easy and cheap to implement and use in "native systems" where the host computer uses it. But there is a difference between a word processing program and a DAW.

I don't know of any high-end hardware DSP, or any dedicated DSP DAW, where they have chosen 32-bit float for processing. Even the older Weiss stuff used a minimum of 40-bit double precision, iirc.
Actually you have it back to front, if you look at when these hardware based DAWs came into existence, you'll see that COST was the issue with floating point. Fixed point DSPs are much simpler and therefore cheaper, for a long time the 56k range gave the best bang for your buck in terms of resolution, speed and cost, so that's what got used.

These days the floating point SHARC has far surpassed it, and that's what's getting used in new designs that aren't constrained by legacy code.
Quote:

click the following link for James Moorer's [sonic solutions] paper:

Moorer paper
He's comparing 48 bit fixed with 32 bit float, which gives the float a 24 bit mantissa, which makes it useless as a generic float versus fixed comparison.

As I said before, pretty much the worst case scenario is that float performs as well as fixed point with the same size as the mantissa, so 32 bit float will give results as accurate or better than 24 bit fixed.
Old 24th March 2011, #134 (Lives for gear, Verified Member)
Quote:
Originally Posted by editronmaximon View Post
That's not what you say in your earlier incorrect statement.
My statement was incomplete in its description. I guess I thought that, having dealt with these sorts of numbers for nearly three decades, I didn't need to spell it out for someone who's read a couple of things on the web and is trying to find fault.
Quote:
That's not true, unless you are perhaps limiting your analysis to certain cases where it may be.

In particular, there are numbers that can be represented in fixed point that cannot be represented at all in floating point.
Not if the mantissa is of the same length as the fixed point representation.
It's easy enough to show, if we use fixed and floating point decimal (the principle is exactly the same, but the numbers make more sense to us as humans).

If we have fixed point with two digits of precision, it can represent any two digit value...

10
56
23
11

etc.

Floating point with the same precision of mantissa can also represent any of those numbers, it can also represent shifted versions of them

so not only can it perfectly represent
10, 56, 23, 11 etc,
but it can also perfectly represent
100000, 5.6, 0.0023, etc.

Now of course we need to store that shift, which is the exponent, so let's say we use one digit for that, giving us an effective exponent of -5 to +4

So the floating point representation takes up three digits of space, versus the fixed point one which is using two... so what if we make the fixed point one three digits?

Well then for a range of numbers the fixed point has the advantage, it can represent values between 100 and 1000 more accurately

123
456
874

etc, the best the equivalent size of float could do is
120
460
870

But anything below 10 it represents less accurately, and it can't represent anything over 999 at all, so it's more accurate in some cases, and worse in others.
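The decimal analogy above can be sketched as code (a toy model written for this edit, not anything from the thread): round a value to a two-digit mantissa and compare it with three-digit fixed point.

```python
import math

def round_to_mantissa(x):
    """Round x to a 2-digit decimal mantissa (the toy float above)."""
    if x == 0:
        return 0.0
    # Choose the exponent so the mantissa has exactly two digits.
    e = math.floor(math.log10(abs(x))) - 1
    return round(x / 10 ** e) * 10 ** e

# Values 100..999 lose their last digit in the 2-digit-mantissa float...
assert round_to_mantissa(123) == 120
assert round_to_mantissa(456) == 460
assert round_to_mantissa(874) == 870

# ...while shifted values that 3-digit fixed point cannot hold at all
# come out exactly (up to Python's own binary floating-point noise).
assert round_to_mantissa(100000) == 100000
assert math.isclose(round_to_mantissa(0.0023), 0.0023)
```

Two-digit values like 10, 56, 23 survive either way; the formats only diverge outside the shared range, which is the "more accurate in some cases, worse in others" point.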
Quote:
As it turns out, writing dsp code does not require a complete understanding. You have tools to work with where others have done the deep work.
You haven't the first clue about what I do, so don't pretend you do.

I program at the "deep" level, and in the past I've jumped through hoops and done various mathematical tricks to extract the last bit of precision out of a fixed point algorithm (hand coded in assembler) in order to give maximum quality on something with very limited processing power.

So stick to **** you know something about, this isn't it.
Quote:
That's pretty funny.
Yeah, hilarious, that I should consider it even vaguely possible to educate you.
Quote:
I have enough understanding to know that you don't.
Q.E.D.
Quote:
One clue for others may be the fact that your homebrew theories are often in complete opposition to what respected authorities say. Too much spin and agenda from you.
Actually generally speaking they're not, just contrary to your limited understanding of what the "respected authorities" say.
Quote:
I've been at this for a long time, and I know how to discern reliable information.
Nope, sorry, that's a fail on that one.
It's not enough for the source to be reliable, and you're not even good at that, you also need to understand the information provided, and you regularly get that wrong.

But there are people here who can benefit, even if you're too obtuse to, so I'll keep posting to help them... and if that means I correct you, then so be it.
Old 24th March 2011, #135 (Gear Addict)
Quote:
Originally Posted by Jon Hodgson View Post
But anything below 10 it represents less accuratly, and it can't represent anything over 999 at all, so it's more accurate in some cases, and worse in others.
That's all less.

Within the relevant range for audio, fixed point is better, hands down.

We don't need to represent 1000dB, or .0000000000000000000000000000000001dB.

Quote:
You haven't the first clue about what I do, so don't pretend you do.
I'm not pretending. You just get really hostile and defensive when you're shown to be wrong.

Quote:
Originally Posted by jonhodgson

and in the past I've jumped through hoops.
Entertaining, I'm sure, but irrelevant.

Quote:
Originally Posted by jonhodgson
and if that means I correct you,
It doesn't.
Old 24th March 2011, #136 (Lives for gear, Verified Member)
Quote:
Originally Posted by editronmaximon View Post
That's all less.

Within the relevant range for audio, fixed point is better, hands down.
"fixed point" can mean anything from 1 bit to 1000 bits, the same with floating point (well you'd need at least 2 bits), so what resolution of each are you comparing?

32 bit float is more accurate than 24 bit fixed, 32 bit fixed can be made more accurate than 32 bit float, though it can be a real pain to do so.

Neither format is better "hands down", if you'd ever developed any DSP, in different numerical formats, you'd know that.
Quote:
I'm not pretending. You just get really hostile and defensive when you're shown to be wrong.
I'm not the one making offensive innuendo about people's agendas.

You made statements about what knowledge is necessary to program DSP, and certainly implied the level of knowledge I needed and the level at which I developed, you were wrong.

Quote:
Entertaining, I'm sure, but irrelevant.
Ah yes, the traditional editron disingenuous incomplete quote for the purposes of insults and misdirection.

Since it was part of a statement talking about my need to manipulate these number formats and the algorithms at the lowest level, it was very relevant to the point under discussion.
Old 24th March 2011, #137 (Gear Addict)
Quote:
Originally Posted by Jon Hodgson View Post

[snip]
These days the floating point SHARC has far bypassed it, and what's getting used in new designs that aren't constrained by legacy code.
That is simply argumentative, and has orders of magnitude of irrelevance to what is implemented in low-cost host-based audio programs.

Quote:
Originally Posted by jonhodgson
He's comparing 48 bit fixed with 32 bit float,
Right, because that is a useful and relevant comparison and analysis, unlike what you post.

24-bit fixed point daws [e.g. Pro Tools HD] do calculations in 48-bit double precision [56-bit accumulator], other than storage to disk and some transmission on the TDM bus, with tons of extra room / bit width.

Quote:
Originally Posted by jonhodgson
As I said before, pretty much the worst case scenario is that float performs as well as fixed point with the same size as the mantissa, so 32 bit float will give results as accurate or better than 24 bit fixed.
And as I, and just about everyone else has stated, that's not so.
Old 24th March 2011, #138 (Lives for gear, Verified Member)
Quote:
Originally Posted by editronmaximon View Post
24-bit fixed point daws [e.g. Pro Tools HD] do calculations in 48-bit double precision [56-bit accumulator], other than storage to disk and some transmission on the TDM bus, with tons of extra room / bit width.
Pro Tools did not start using 48 bit processing until the HD update; before then it, and other systems like Soundscape, worked at 24 bit fixed point. This wasn't a quality decision, it was a performance one: 48 bit fixed or 32 bit float (both superior to 24 bit fixed) would have cost too much.

Digi went down the Motorola 56k route because it was the only financially viable option which offered acceptable quality at the time (running 24 bit fixed point). When they had more power available due to newer DSPs in the same family, they had no choice but to go to 48 bit fixed point to improve quality; float was not an option.

In their native systems they've gone float, they could have gone 64 bit fixed.
Old 24th March 2011, #139 (Lives for gear, Verified Member)
Quote:
Originally Posted by editronmaximon View Post
And as I, and just about everyone else has stated, that's not so.
You keep comparing different things.

48 bit fixed calculations more accurate than 32 bit float? yes, they can be, though not always, for a start you have to scale things internally accordingly to get the maximum accuracy, in effect you have to program something that floating point does for you automatically.

24 bit fixed point calculations more accurate than 32 bit floating point? No, never

24 bit fixed point representation more accurate than 32 bit floating point? No, (well not for individual samples, on the other hand it is possible to argue that for a signal a properly dithered fixed point representation is more accurate than a non dithered float representation).

64 bit floating point more accurate than 48bit fixed? Yes.

You see you can't simply say that fixed is better than float, you need to be specific about what you're comparing, and where... and even then there isn't really a simple answer.
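One of the comparisons above ("24 bit fixed point representation more accurate than 32 bit floating point? No") can be checked numerically. This is a sketch written for this edit, measuring worst-case representation error of a full-scale random signal in each format:

```python
import numpy as np

# Random signal in [-1, 1), the usual normalized audio range.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 100_000)

# Quantize onto a 24-bit fixed-point grid (step 2**-23)...
fixed24 = np.round(x * 2**23) / 2**23
# ...and onto float32, then back to float64 for comparison.
f32 = x.astype(np.float32).astype(np.float64)

err_fixed = np.max(np.abs(x - fixed24))
err_float = np.max(np.abs(x - f32))

# float32 error scales with each sample's magnitude, so over a signal
# bounded by 1.0 its worst case stays below the fixed-point half-step.
assert err_float < err_fixed
print(err_fixed, err_float)
```

This only tests per-sample representation, which matches the careful wording above; it says nothing about dithered signals or about accumulated processing error.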
Old 24th March 2011, #140 (Lives for gear, Verified Member)
Quote:
Originally Posted by editronmaximon View Post
24-bit fixed point daws [e.g. Pro Tools HD] do calculations in 48-bit double precision [56-bit accumulator], other than storage to disk and some transmission on the TDM bus, with tons of extra room / bit width.
You're very trusting.

The DSP in a PT HD system only has a 24 bit multiplier, which means that 48 bit multiplications are a big hit in both performance and development time, you really think they're using them everywhere?

Not just in the mixing engines, but also in effects, including third party ones? Digi may have gone the whole hog (I doubt it though), but other companies will make their decisions based on their own judgements of required accuracy and performance criteria.

Dream on.

I would say there's a lot of straight ahead 24 bit fixed point (yes with 56 bit accumulator) going on in a typical PT HD session.
Old 25th March 2011, #141 (Gear Addict)
Attached is the Rane Fixed vs. Floating Note:
Old 25th March 2011, #142 (Gear Addict)
Relevant Excerpts From Rane Fixed vs. Float Note

"Since fixed-point is the most picked on, let's begin with how
it can be superior to floating-point. Here is the executive summary
of why fixed-point DSPs can make for superior audio:
1. Less dynamic range (yes, in DSPs used for audio, this can be
a feature).
2. Double-precision capable 48-bit processing.
3. Programming flexibility that can guarantee proper behavior
under the adverse conditions presented by audio signals
— truly one of nature’s oddest phenomena.
4. Lower-power consumption (floating point hardware is more
complicated than fixed point; more transistors require more
watts).


The defining difference is that the fixed-point
implementation offers double precision, while the floating-point
device features increased dynamic range. In floating-point
processors scaling the data increases dynamic range, but scaling
does not improve precision, and in fact degrades performance for
audio applications (more on this later). And it turns out that the
strength of the fixed-point approach is the weakness of the floating-
point, giving fixed-point a double advantage.

The benefit is most obvious in low frequency audio processing.
This is important since most of the energy in audio lies in
the low-frequency bands (music and speech have an approximate
1/f spectrum characteristic, i.e., each doubling of frequency results
in a halving of amplitude). The simple truth is that the floating-point
technique struggles with large amplitude, low-frequency
computations.
In fact, building a high-Q, low frequency digital
filter is difficult no matter what method you use, but all things
considered fixed-point double-precision is superior to floating-point
single-precision.



As mentioned earlier, both DSP designs have a 24-bit processor
for the mainstream functions. The fixed-point technique
adds double precision giving it 48-bit processing power, while
the floating-point design adds an 8-bit exponent. The 8-bit
exponent gives the floating-point architecture an astonishing
dynamic range spec of 1500 dB (8 bits = 256, and 2^256 equals approximately
1500 dB) which is used to manipulate an operating
window, within which its 24-bit brain operates. Floating-point
processors automatically scale the data to keep it within optimum
range. This is where the trouble lies and this is why fixed-point
is better than floating-point for audio. It is not that the
dynamic range is the problem so much as the automatic scaling
over a 1500 dB range that is the problem.
Fixed-point, with its
48-bits, gives you 288 dB of dynamic range – enough for superior
audio – but the programmer has to scale the data carefully.
Floating-point programmers leave it up to the chip, but, unless
they are careful, that creates serious errors and noise artifacts. All
the jumping about done by the continuous signal boosting and
attenuating can produce annoying noise pumping.



What’s bad about a floating-point processor with a dynamic
range of 1500 dB is that it scales its processing range based on
the amplitude of the signal it's dealing with, but when dealing
with signals of differing amplitudes (i.e., real audio), the scaling
may not be optimized for the mixed result.
When dealing with
audio signals the installer cannot simply ignore the subtleties
of system setup because they have a floating-point processor in
their box.
Consider the typical audio mixer scenario: At any given moment
a mixer can have multiple levels present at its many input
ports. Input-1 might have a high-level sample to deal with while
Input-2 has a very low level, Input-3 somewhere in the middle
of its upper and lower limits and so on. A 32-bit floating-point
DSP chip makes a determination about the appropriate window
within which to work on a sample-by-sample basis but finally
represents its calculations in the same 24-bit manner as its fixed-point
counterpart. Even in a simple two-channel stereo processor
signal levels between channels, while similar in average level, can
be vastly different instantaneously due to phase differences.
Nothing is gained by using a floating-point device in an audio
application but much may be lost. It does not have the 48-bit
double precision capability of a fixed-point solution, and noisy
artifacts may be added.



While floating-point DSPs give the flexibility to misadjust the system (too
much internal gain) without noticeable internal clipping, they
still suffer the unintended consequences of the misalignment

(say, in trying to mix two channels of very different audio levels)
that floating-point processors cannot fix. They merely mask the
problem from the installer’s view. Or, worse, produce audible
and annoying rise in quantization noise when filters are used
below 100 Hz. In this sense the fixed-point processors force the
installer to maintain the 144 dB processing window by avoiding
internal clipping through proper gain structure/setup and
so make maintaining overall quality easier than floating-point
processor-based boxes.


Double Precision
The double precision 48-bit processing is used when long
time constants are required. This occurs when low frequency
filters are on the job and when compressors, expanders and limiters
are used with their relatively slow attack and release times. If
24 bits are all that are available when more precision is required,
the results are a problem. The function misbehaves and the least
damaging result is poor sound quality. The worst result is amplifier
or loudspeaker damage due to a misbehaving DSP crossover,
making double precision a must-have for superior audio.


Examples and Counterexamples

Floating-point evangelists like to use an example where the
processor is set up for 60 dB attenuation on the input and 60 dB
make-up gain on the output. Leaving aside the absurdity of this
fabricated example, let’s use it to make our fixed-point-is-better
point: add a second input to this example, with the gain set for
unity, a 0 dBu signal coming in, and configure the processor to
sum both these channels into the output and listen to the results
— you will not like what you hear.

Another revealing example is how you never hear floating-point
advocates talk about low-frequency/high-Q filter behavior.
The next time you get the opportunity, set up a floating-point
box parametric filter for use as a notch filter with a center
frequency of 50 Hz and a Q of 20. First listen to the increase in
output noise. Now run an input sweep from 20 Hz to 100 Hz
and listen to all the unappetizing sounds that result. Audio filters
below about 100 Hz require simultaneous processing of large
numbers and small numbers — something fixed-point DSPs do
much better than their floating-point cousins.



Free the Developers

The real determinant of quality in audio DSP is the skill
of the programmers. They must devise accurate and efficient
algorithms; the better their understanding of the (sometimes-arcane)
math, the better the algorithm; the better the algorithm,
the better the results. Fixed-point processing delivers a load of
responsibility to the hands of the developer. It also delivers an
equal amount of flexibility. A talented engineer with a good
grasp of exactly what is required of a DSP product can fashion
every detail of a given function down to the last bit. This is not
so with floating-point designs. They offer an ease of programming
that is seductive, making them popular when engineering
talent is limited, but not the best choice.
On one hand it is easier
to program but on the other hand it is less controlled as to the
final results — and, as we all know, that is what is important.



What is required in a floating-point DSP to achieve superior
audio? Here are some pretty nasty “ifs” necessary for floating-
point to overtake fixed-point:
if it is a 56-bit floating-point
processor
(i.e., 48-bit mantissa plus 8-bit exponent) or 32-bit
with double-precision (requiring a large accumulator), if the
parts run at the same speed as the equivalent fixed-point part,
if they use the same power, and if they cost the same, then the
choice is made.


Let’s Be Precise About This …

An example is the best way to explain how you lose precision
when floating-point processors scale data. Assume you have
two mythical 3-digit radix-10 (i.e., decimal) processors. One is
“fixed-point”, and one is “floating-point.” For simplicity, this
example uses only positive whole numbers. (On real fixed- or
floating-point processors, the numbers are usually scaled to be
between 0 and 1.)
The largest number represented in single precision on the
fixed-point processor is 999. Calculations that produce numbers
larger than 999 require double precision. This allows numbers
up to 999999.
Let the floating-point processor use 2 digits for the exponent,
making it a 5-digit processor. This means it has a dynamic range
of 0 to 999 × 10^99, a huge number. To see how this sometimes
is a problem, begin with the exponent = 0. This allows the
floating-point processor only to represent numbers up to 999
– same as the fixed-point single-precision design. Calculations
that produce numbers larger than 999 require increasing the
exponent from 0 to 1. This allows numbers up to 9990. However,
notice that the smallest number (greater than zero) that can now
be represented is 1 × 10^1 = 10, meaning numbers from 1-9 cannot
be represented (nor 11-19, 21-29, 31-39, etc.). Increasing the
exponent to 3 only makes matters worse: you can cover (almost)
the same range as the fixed-point processor (up to 999000), but
the smallest representable number is now 1 × 10^3 = 1000, so
numbers from 1 to 999 cannot be represented. The next increment
is 2 × 10^3 = 2000, so the represented number jumps from 1000
straight to 2000, and numbers from 1001 to 1999 cannot be
represented either. With exponent = 3, each increment of 1 in
the mantissa increases the number by 1000, leaving another 999
values that cannot be represented.
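The mythical processor's behaviour can be sketched in a few lines of Python. This is an editorial illustration; the function name and the truncation rule (low-order digits simply discarded) are my reading of the example:

```python
def quantize(x, e):
    """Editor's sketch: the value the mythical 3-digit-mantissa decimal
    floating-point processor reads back when the non-negative whole
    number x is stored with a fixed exponent e (low digits truncated)."""
    m = min(x // 10 ** e, 999)   # 3-digit mantissa
    return m * 10 ** e

assert quantize(847, 0) == 847     # exponent 0: exact, like fixed point
assert quantize(7, 1) == 0         # exponent 1: 1-9 cannot be represented
assert quantize(1999, 3) == 1000   # exponent 3: 1000-1999 all read as 1000
assert quantize(999, 3) == 0       # ...and everything below 1000 vanishes
```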

Is this as big a problem as it first appears – well, yes and
no. At first it looks like the floating-point processor has lost the
ability to represent small numbers for the entire calculation’s
time, but the scaling happens on a per-sample basis. The loss of
precision only occurs for the individual samples with magnitude
greater than 999. Now you might think that everything is
fine, because the number is big and it does not need the values
around zero. But a few wrinkles cause trouble. When calculations
involve large and small numbers at the same time, the loss
of precision affects the small number and the result. This is
especially important in low-frequency filters or other calculations
with long time constants. Another wrinkle is that this happens
automatically and beyond the control of the programmer.
If the
programmer does not employ the right amount of foresight, it
could happen at a bad time with audible results.
In the fixed-point case, the programmer must explicitly
change to double precision – there is nothing automatic about it.
The programmer changes to double precision at the start of the
program section requiring it and stays there till the work is done.
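What "switching to double precision" means in binary fixed point can be sketched concretely. The 24/48-bit widths come from the surrounding discussion; the Q1.23 encoding, the helper name, and the sample values are my own illustrative choices:

```python
def q23(x):
    """Encode a float as a Q1.23 fixed-point integer (24-bit word).
    Editorial sketch; the format choice is an assumption."""
    return int(round(x * (1 << 23)))

a = q23(0.000123)          # a small signal value
b = q23(0.9)               # a gain coefficient

double = a * b             # full 48-bit product: nothing is thrown away
single = (a * b) >> 23     # truncated straight back to a 24-bit word

lost = double - (single << 23)
assert lost > 0            # the single-precision path discarded real signal
```

The programmer decides explicitly whether to keep the full 48-bit product or collapse it back to 24 bits, which is the control the article attributes to fixed-point designs.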


The Big and the Small

Over and over in audio DSP processing you run into the
same simple arithmetic repeated over and over: multiply one
number by another number and add the result to a third number.
Often the result of this multiply-and-add is the starting
point for the next calculation, so it forms a running total, or an
accumulation, of all the results over time. Naturally enough,
adding the next sample to the previous result is called an “accumulate”
and it follows that a multiply followed by an accumulate
is called a MAC. MACs are the most common of all operations
performed in audio DSP, and DSP processors typically have
special hardware that performs a MAC very, very quickly.
As results accumulate, errors also accumulate. As well, the
total can get large compared to the next sample. To show this in
action return to the mythical 3-digit processors. Say we have the
series of numbers shown in the row labeled “Samples” in Table
1; a strange looking set of numbers, perhaps, but it represents
the first part of a simple sine wave. Multiply the first number by
a small constant (say, 0.9) and add the result to the second number:
0 x 0.9 + 799 = 799. Multiply this result by 0.9 and add it to
the third number: 799 x 0.9 + 1589 = 2308. And again: 2308 x
0.9 + 2364 = 4441. Continue this pattern and it forms a simple
digital filter. The results using double precision fixed-point are
shown in the row labeled “Fixed-Point Results” in Table 1.

Sample#   Samples   Fixed-Point Results   Floating-Point Results
   1          0             0                      0
   2        799           799                    799
   3       1589          2308                   2290
   4       2364          4441                   4420
   5       3115          7112                   7080
   6       3835         10236                  10200
   7       4517         13729                  13600
   8       5154         17510                  17300
   9       5739         21498                  21200

Table 1: Results of Fixed- vs. Floating-Point Accumulation

What about the floating-point processor? Start with exponent
= 0. The results are: 0, 799, … the next number is too big,
so increase the exponent to 1 … 2290, 4420, etc. Notice that
the floating-point values are smaller than they should be because
the limited precision forces the last one or two digits to be 0. It’s
easy to see that each result has an error, and the errors are carried
forward and accumulate in the results. Algorithms with long
time constants, such as low frequency filters, are especially prone
to these errors.
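Both result columns of Table 1 can be regenerated with a short simulation. This is an editorial reconstruction: the truncate-to-3-significant-digits helper is my reading of how the mythical processor discards low-order digits, but it reproduces the published columns exactly:

```python
def trunc3(x):
    """Truncate a non-negative value to 3 significant decimal digits,
    like the mythical 3-digit-mantissa floating-point processor."""
    x = int(x)
    e = max(len(str(x)) - 3, 0)     # digits beyond the first three
    return x // 10 ** e * 10 ** e   # ...are forced to zero

samples = [0, 799, 1589, 2364, 3115, 3835, 4517, 5154, 5739]

fixed = fl = 0
fixed_out, float_out = [], []
for s in samples:
    fixed = round(fixed * 0.9) + s             # double-precision MAC
    fl = trunc3(trunc3(fl * 0.9) + trunc3(s))  # every step truncated
    fixed_out.append(fixed)
    float_out.append(fl)

assert fixed_out == [0, 799, 2308, 4441, 7112, 10236, 13729, 17510, 21498]
assert float_out == [0, 799, 2290, 4420, 7080, 10200, 13600, 17300, 21200]
```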
You’ll also notice that the accumulated values are getting
larger than the input samples. The long time constant in low
frequency filters means that the accumulation happens over a
longer time and the accumulated value stays large for a longer
time. Whenever the input signal is near zero (at least once every
cycle in a typical audio signal) the samples can be small enough
that they are lost; because the accumulated value is large, the
samples fall entirely outside the precision range of the floating
point processor and are interpreted as zero. The double precision
available in the fixed point processor helps the programmer to
avoid these problems.
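The same absorption effect is easy to demonstrate with real IEEE-754 single precision, emulated here by round-tripping through Python's struct module (an editorial illustration, not part of the quoted article):

```python
import struct

def f32(x):
    """Round a Python float (double) to IEEE-754 single precision."""
    return struct.unpack('f', struct.pack('f', x))[0]

big = 16_777_216.0     # 2**24: a large accumulated value
small = 1.0            # a near-zero input sample

# In single precision the small sample is absorbed completely...
assert f32(big + small) == big
# ...while a double-precision accumulator keeps it.
assert big + small == 16_777_217.0
```

This is the "samples interpreted as zero" failure the article describes, and also why a wider accumulator (whether double-precision fixed point or a 64-bit float) avoids it.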
Old 25th March 2011
  #143
Gear Addict
 

Quote:
Originally Posted by Jon Hodgson View Post

You're very trusting.
Now I know for sure that you're not paying attention.

Quote:
Originally Posted by jonhodgson
The DSP in a PT HD system only has a 24 bit multiplier, which means that 48 bit multiplications are a big hit in both performance and development time, you really think they're using them everywhere?

Not just in the mixing engines, but also in effects, including third party ones? Digi may have gone the whole hog (I doubt it though), but other companies will make their decisions based on their own judgements of required accuracy and performance criteria.

Dream on.
Yeah, they are using them "everywhere". Most modern TDM plugins [and even RTAS] are double precision, including third-party ones.
Old 25th March 2011
  #144
Lives for gear
 

This whole argument is a bit silly. Digital recordings today don't sound anywhere near as good as digital recordings from 20 years ago. So yes, the minutia of fixed vs floating obviously makes some difference, but in the face of the 10,000 or so downgrades we've made because we're recording in cheaper rooms/quantizing bad players/auto tuning the humanity into oblivion/automating the details into flat mush, and then covering the whole thing with a nice layer of dull limiting...I can't see how the difference between fixed and float can be worth this much conflict. No matter what one you use you're going to be shamed by 16 bit digital productions from the 90s simply because producers weren't mandated to suck back then.
Old 25th March 2011
  #145
Lives for gear
 
Cellotron's Avatar
 

Verified Member
Quote:
Originally Posted by Cheebs Goat View Post
This whole argument is a bit silly. Digital recordings today don't sound anywhere near as good as digital recordings from 20 years ago. So yes, the minutia of fixed vs floating obviously makes some difference, but in the face of the 10,000 or so downgrades we've made because we're recording in cheaper rooms/quantizing bad players/auto tuning the humanity into oblivion/automating the details into flat mush, and then covering the whole thing with a nice layer of dull limiting...I can't see how the difference between fixed and float can be worth this much conflict. No matter what one you use you're going to be shamed by 16 bit digital productions from the 90s simply because producers weren't mandated to suck back then.
I recently became obsessed with Talk Talk's "Laughing Stock" - immaculately recorded by Phill Brown over an arduous and exacting 8 month period, and mastered 20 years ago in 1991 - and it is a perfect example of what you are talking about. It sounds ridiculously great on CD and puts most current recordings to shame.

In essence: the problem is not 16bit/44.1kHz PCM in and of itself. The problem is what we are currently way too often doing (or not doing) during the recording, mixing and mastering for this format!

Best regards,
Steve Berson
Old 25th March 2011
  #146
Gear Addict
 

Quote:
Originally Posted by Cheebs Goat View Post
This whole argument is a bit silly. Digital recordings today don't sound anywhere near as good as digital recordings from 20 years ago. So yes, the minutia of fixed vs floating obviously makes some difference, but in the face of the 10,000 or so downgrades we've made because we're recording in cheaper rooms/quantizing bad players/auto tuning the humanity into oblivion/automating the details into flat mush, and then covering the whole thing with a nice layer of dull limiting...I can't see how the difference between fixed and float can be worth this much conflict. No matter what one you use you're going to be shamed by 16 bit digital productions from the 90s simply because producers weren't mandated to suck back then.

With all due respect, speak for yourself. I have not been "mandated to suck", and my digital recordings sound every bit as good, or better, than those "from 20 years ago". Methinks you are "over-generalizing".

Some things I am not doing [partial list]

1. recording in cheaper rooms
2. quantizing bad players
3. autotuning the humanity into oblivion
4. automating the details into flat mush
5. covering the whole thing with a nice layer of dull limiting.
Old 25th March 2011
  #147
Lives for gear
 
Cellotron's Avatar
 

Verified Member
Quote:
Originally Posted by editronmaximon View Post
With all due respect, speak for yourself. I have not been "mandated to suck", and my digital recordings sound every bit as good, or better, than those "from 20 years ago".
Post links to the releases or it didn't happen.


Best regards,
Steve Berson
Old 25th March 2011
  #148
Lives for gear
 
Cellotron's Avatar
 

Verified Member
Quote:
Originally Posted by editronmaximon View Post
Attached is the Rane Fixed vs. Floating Note:
Definitely an article well worth reading.
Think I first read that when Bob Lentini (author of the SAW apps who chose fixed point over floating point for the vast majority of its internal math) linked to it about 8 years ago or so.

Best regards,
Steve Berson
Old 25th March 2011
  #149
Lives for gear
 

Verified Member
Quote:
Originally Posted by editronmaximon View Post
Relevant Excerpts From Rand Fixed vs Float Note
... followed by a lot of stuff which I already know.

I've already acknowledged several times that 48 bit fixed point can usually be made better than 32 bit float.

Having actually ported certain codecs to a fixed point processor, I think I can say with authority that 48 bit fixed point isn't ALWAYS superior.

And you certainly can't make the blanket statement that fixed point wins over floating point, "hands down", because as I've repeatedly pointed out, there are too many variables in that statement.

If you want to say "fixed point using 48 bit calculations and 24 bit storage and transfer can generally be made superior to using 32 bit float throughout" then that's fine (though you'd better get the gain staging right, otherwise that 24 bit storage and transfer could give you issues).

But then switch to using 64 bit floating point for calculations and the situation reverses.

And in truth there are a lot of processes that can be done where you won't notice the difference going above 24 bit fixed or 32 bit float, given a good DSP programmer.

Note the difference here: the totality of your knowledge is your understanding of people you're quoting off the web (often very clearly a misunderstanding; you regularly draw incorrect conclusions from correct data), while I've actually done the stuff you're arguing with me about. I've done DSP in floating point and fixed point, with various word widths. Try getting reasonable mp3 decode quality out of a processor with a 16 bit multiplier and limited cycles and you'll get to know something about number formats and audio quality... so, how much DSP work have you done? How much programming of any kind? I think I can guess the answer.

Your lifetime's experience is, I suspect, less than I've done this morning, and I haven't even had breakfast yet.

It also amuses me that you accuse me of having an agenda, yet so far every source you've quoted as authority has been one in a position of trying to sell their hardware or services, however I don't, on the whole, disagree with them in their specific evaluations (for example 48 bit fixed algorithms versus 32 bit float), what I disagree with is the generality of your interpretations and declarations.

Anyway, I think I need to waste less time trying to educate you, I have more interesting and worthwhile things to do, like developing DSP code, but if anyone else has questions, fire away.
Old 25th March 2011
  #150
Lives for gear
 

Verified Member
Quote:
Originally Posted by Cellotron View Post
Definitely an article well worth reading.
Think I first read that when Bob Lentini (author of the SAW apps who chose fixed point over floating point for the vast majority of its internal math) linked to it about 8 years ago or so.

Best regards,
Steve Berson
Worth reading, but it's rather weighted.

For example in the example they choose to use they talk about two mythical 3 digit decimal processors, one using one of those digits as an exponent for float, and one using them all for a fixed point representation (the same thing I described a few posts earlier).

In this circumstance the floating point processor loses one digit of precision in exchange for the scaling, as already stated by me earlier, so no argument there.

But the whole text is talking about 24 bit fixed point processors versus 32 bit float, so the fair analogy would be a three digit fixed point processor versus a FOUR digit floating point one, in which case the floating point one would have as many digits for precision as the fixed point one... and their attempt to show the superiority of fixed versus float in that particular instance would be rather less effective.

They also talk about accumulation of long filter lines, and how small values can be lost in float... but there is an assumption in that argument, which is that the fixed point calculations are using a larger accumulator than 24 bits, whereas the float calculations are accumulated at 32 bits.

But floating point DSPs typically have larger accumulators, just like fixed, and native systems can easily use a 64 bit double for that accumulation. Thus making that specific argument null and void in those cases.

In short the paper isn't incorrect, but it is guilty of spin by omission.