The No.1 Website for Pro Audio
Is it worthwhile recording at 32 bits?
Old 2nd September 2008
  #61
Gear Addict
 

Quote:
Originally Posted by psycho_monkey View Post
Well, if PT is 48 bit fixed...then a 32 bit fixed system would obviously have less processing headroom than a 48 bit fixed system! the same way a 16 bit system has less dynamic range than a 24 bit system.

Whether 32 bit float is better than 48 bit fixed is open to debate...my knowledge bottoms out here I'm afraid, although I've heard arguments for and against both.
Ah, that's right.... I'm sorry, I was talking about the wrong thing here. That much I knew. And boy, do I remember those debates from a few years back. That has more to do with the mixer/summing, right? I didn't mean those "bits" when I worded my question.

What I meant by my first question was: "Aren't you able to record at 32-bit / 384k with the Pyramix and Sadie systems?"
Old 2nd September 2008
  #62
Gear Addict
 

Quote:
Originally Posted by Infa View Post
***REFER TO HIS ORIGINAL POST ABOVE***
Hi Infa,

It may serve you well to make a distinction between “conversion bits” and “processing bits”.

The music itself (which is an electric signal all the way from the mic output to the speaker or headphone) cannot be 32 bits, nor can it be 24 bits, nor should it be. The ear does not respond to a 144dB dynamic range. That is a fact, regardless of the format. The concepts of dynamic range and “conversion bits” are tied together. As was stated here, there is a limitation due to component and circuit self-generated noise, which sets the limit of dynamic range.

But one can “park” a 16-bit signal (or a 20-bit signal) in a DAW. Think of the number 62. One could write it as 062 and it is still the same number. Why do such a silly thing? Why should we allocate 3 digits for the expression of 2 digits? Say I want to “add a gain” of x10, so the number will become 620.

But if you want to amplify 62 by 20, the result is 1240, and that calls for one more digit. If all you have is 3 digits, the result 1240 will become 240. That is a huge error. So you need more digits (processing bits).

Similarly, if you try to divide (attenuate) the number 62 by say 3, the result is 20.666666…. which calls for a lot more digits (or “space”).

Say you have 16 channels, and you want to add them. That adds up to a bigger number, thus more “space” on the left side. Now take some of the channels and attenuate, which calls for more digits (space) on the right side… Also note that some processing (such as EQ boosts or reductions) call for more space…
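Dan's 3-digit analogy is easy to play with in code. A minimal Python sketch (the function name is made up for illustration):

```python
def three_digits(n):
    """Keep only the lowest 3 decimal digits, like a register that is too small."""
    return n % 1000

x = 62
print(three_digits(x * 10))   # 620: the x10 gain still fits in 3 digits
print(three_digits(x * 20))   # 240: the true answer 1240 lost its leading digit
print(x / 3)                  # 20.666...: attenuation needs extra digits on the right
```

The same wrap-around happens in binary when a fixed-width accumulator overflows; that is exactly what the extra processing bits are there to prevent.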

The DAW provides “a lot of digits” on both sides of the conversion digits (processing bits). The user does not have to worry about running out of digits on either side. The analogy for a DAW is a large “scratch pad”, providing a lot of area to ensure that whatever you do (moving things around, blowing them up, merging and so on), you never run off the page. The idea is to never get to the edge of the page.

But when your processing is all done, you need to make a final product. You now need to get back to the real world of a “limited size scratch pad”. You need to take what the ear can hear (certainly not 24 bits), and what real hardware can play (also not a real 24 bits). If your final format is say a CD, you need to take the 16 most significant bits only. If your final product (or archive) is 24 bits, you need to truncate the DAW outcome to 24 bits… When doing so, it is best to look at dither with noise shaping, but that is a whole other subject.
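As a rough illustration of that final step, here is one common approach: plain TPDF dither before truncating to 16 bits. This is only a sketch under simplifying assumptions (no noise shaping, which Dan mentions as the better practice), and the function name is made up:

```python
import random

_rng = random.Random(0)  # seeded only so the sketch is repeatable

def to_16_bit(sample):
    """Reduce a float sample in [-1.0, 1.0) to a 16-bit integer with TPDF dither."""
    lsb = 1.0 / 32768.0                              # one 16-bit step
    dither = (_rng.random() - _rng.random()) * lsb   # triangular PDF, +/-1 LSB
    value = int(round((sample + dither) * 32767.0))
    return max(-32768, min(32767, value))            # clamp to the 16-bit range

print(to_16_bit(0.5))   # a half-scale sample lands at (about) 16384
```

The dither randomizes the rounding error so quiet material decays into noise rather than distortion.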

So the answer is NO. You can take a signal and pad it with zeros on both sides for the sake of doing processing. But there is no way and no processing that will eliminate the noise that came into the DAW, and there is no point in feeding more bits to a DA than what is dictated by the limitations of the ear or the DA itself.

I hope my examples help you understand it. Note that while my examples used common decimal digits (such as 62); the same concepts apply for binary numbers made out of 0’s and 1’s.

Regards
Dan Lavry
Old 2nd September 2008
  #63
Lives for gear
 
Hey Dan (you rock of course and I loved the TapeOp interview),

If someone has no boutique preamp (yet good ADs), would it be better to

1. Run a microphone (like K2 with power supply) straight into the AD converters at the low signal strength, or

2. Use a less boutique preamp so the ADs have more analog gain?

It's obvious either way can have its downsides, but I'd love to hear your take on the gains (no pun intended).
Old 2nd September 2008
  #64
Gear Addict
 

Quote:
Originally Posted by Dan Lavry View Post
Hi Infa,

***REFER TO HIS ORIGINAL POST ABOVE***

Regards
Dan Lavry
Thanks Dan.. Yeah, that does clear things up quite a "BIT". - LOL - My "dynamic range" of understanding this subject is at 144dB!! heh

Seriously though, thanks for your time on this subject, and helping me understand this Bit thing more.

Was wondering though... as this question of mine never seems to get addressed. Are they ever, in the future (immediate or way down the road), going to be able to "apply" bits to something else in the audio besides dynamic range? Then the reason for more bits would make sense: the "bottom 10" (or whatever) could in theory be applied to some other attribute of the audio to help the digital realm "interpolate" it better. I mean like EVER down the road?

Kinda like how, when video first came out going to TV, it was 3 RCA jacks, right? One was video (let's say a metaphor for bits), one was audio L and the other was audio R. At the time everyone probably thought that was it and the best it could get.... BUT then they figured out how to divide that SAME video signal (metaphor for bits here) into component cables, where now it is 3 video cables and 2 audio cables. So they learned how to divide the video signal into its 3 most important parts, and let each have its own dedicated cable and connection. Then the video players had to get an "engine", in a sense, for EACH of those important components of the video signal. The outcome was a far superior video picture!!

COULD that ever happen with audio? Break it down into its most important components, and THEN have each of those get its own bit engine or whatever, to make a FAR better capture... Do you understand what I am asking here? What's your take on that possibility? And maybe it's not even bits that would be the "engine" for every component, but something. So I am not stuck on bits here... but they do seem to always be the "engine" for interpolating things, I notice.
But the main point would be the possibility of breaking audio down further than it is now, and giving every one of those broken-down components its own "engine" and cable and converter or whatever, to give an end result of a much higher quality audio capture.

Completely different subject here, BUT let's switch from bits to sample rate:

I know the discussions about how 96k is already overkill and 192k is unneeded. But then you've got DAD converters going up to 384k!! I mean, why would that be available, and why would people use them regularly, unless there was something they were actually getting out of the 384k? Myself, I like to use 192k when archiving stuff off of my ATR 1" stereo reel-to-reel tape machine. I do notice a difference I like between the 192k captures and the 96k captures. And it is not something "additive" that wasn't there in the first place (that would be a dishonest sound and I wouldn't like it) - it is more like "capturing the air around the audible frequencies, causing them to sound like they do when I am listening to the tape". In the 96k captures, it seems like that "air" was not captured (because it is inaudible anyway) - BUT in turn that caused the AUDIBLE frequencies to sound different/be interpreted differently by the human ear. The old "residual effect" thing. Now, in your professional opinion, what do you think about that? Totally impossible? Plausible? Because I have said it here before, but no one addressed the possibility.... Do you believe the "neighboring frequencies affect other frequencies" theory? So, in turn, an INAUDIBLE neighboring frequency could actually affect an AUDIBLE frequency?? Yes? No? Maybe so?

I would love to hear your thoughts on this.

And thanks for your time.
Old 2nd September 2008
  #65
Gear Addict
 

Quote:
Originally Posted by JoeyM View Post
Hey Dan (you rock of course and I loved the TapeOp interview),

If someone has no boutique preamp (yet good ADs), would it be better to

1. Run a microphone (like K2 with power supply) straight into the AD converters at the low signal strength, or

2. Use a less boutique preamp so the ADs have more analog gain?

It's obvious either way can have downer side effects, but I'd love to hear your take on the gains (no pun intended).
It really depends on a lot of variables:

You need to know the performance of the AD. Say the AD offers some internal gain. How much does such a feature cost you in terms of distortion? How much does it cost you in terms of noise (dynamic range)? My LavryBlack AD has a built-in 0-13dB of gain in 1dB steps. At the maximum gain of 13dB you lose almost nothing in terms of distortion and noise (compared to 0dB gain). But say I wanted to push the gain to 20dB or more; then the dynamic range (and possibly the distortion) limitations inside the AD would suffer. I just wanted the AD10 to be rock solid and clean, but that of course moves the gain issues elsewhere, to the micpre.

So you now need to know the limitations of the micpre. Very often there is some tradeoff between dynamic range and distortion. As a rule, micpres introduce less distortion when set to lower gain. And micpres always introduce more noise as you set them to higher gain. So in some sense, from a distortion and noise standpoint, it is best to use a micpre at as low a gain setting as possible. But that would defeat the purpose of a micpre, which is there to amplify the signal.

What am I getting at? The third consideration (besides the AD and the micpre) has to do with how much signal you get from the microphone (mic type and mic distance from sound source). It is one thing to have a singer sing loudly into a hand held high efficiency condenser mic (the signal would be very strong). It is another thing to have say a ribbon located far away, yielding a weak signal...

So there is no single answer to your questions. There are at least 4 variables (mic, mic location, micpre and AD) to consider. I can say that in the case of my converters (LavryBlack, LavryBlue and LavryGold AD's) I would tend to utilize the internal AD gain, thus making the life of the micpre easier by around 13dB. That would translate to an almost direct 13dB better overall dynamic range. The same is true for some AD gear on the market, but other AD gear may not follow that rule.

Of course, there are many cases where it is advantageous to set the AD to minimum gain (0dB gain). For example, if you want to send a signal to the AD over a long cable run, having a 24dBu signal source (pro level) will serve you very well in terms of rejection of interference due to external noise (radio, AC hum and so on) as well as grounding issues. So noise and distance considerations may be the dictating factors in some cases, opposite to my previous comment....

Your question is not easy to answer, and I have barely touched on it. A complete answer would take many pages and a lot of time, but I believe that my answer covers the main points.

Regards
Dan Lavry
Old 2nd September 2008
  #66
Gear Addict
 

Quote:
Originally Posted by Infa View Post
***REFER TO HIS ORIGINAL POST ABOVE***
Read my paper "Sampling Theory", and then, if you do not understand it, I will try and answer your question. You can find the paper at my website:
Lavry Engineering - Unsurpassed Excellence under support.

It is a long paper, but I kept the math out of it, and included a lot of graphic plots.
The idea is to explain the subject for people that are not into math and engineering.

Regards
Dan Lavry
Old 3rd September 2008
  #67
Gear Addict
 

Quote:
Originally Posted by Infa View Post
***REFER TO HIS ORIGINAL POST ABOVE***
Hello Infa.

More bits are not only for higher dynamic range. More (good) bits are also for better sound (less distortion). The explanation is a “bit” complex -

For a more detailed answer, go to my forum at Lavry Engineering - Unsurpassed Excellence
I just posted a pretty long "intuitive" explanation titled

"More (good) bits are not only for better dynamic range".

I am not sure your question and my answer have much to do with the thread (32 bits), but it is OK with me if you (or anyone) wish to steer the conversation to "the reason for more bits". I think it is a good subject; there is a whole lot that can be said about conversion other than the number of bits. Of course, as a designer I keep much of my thoughts and findings to myself, but I can share some of the less proprietary information, such as the basic principles. I also realize that what interests me greatly may be boring for others.

If you wish, you are welcome to post a copy of my comments here. A link will not work because reading my forum requires one to register.

Regards
Dan Lavry
Old 3rd September 2008
  #68
hv_
Gear Nut
 

Seems to me that the simplistic analysis is: if the converter is only putting out 24 bits in fixed format, accuracy questions aside, what advantage could there possibly be to writing it to disk on the fly in some other, larger format? To save your DAW a conversion step later? I would think you'd want to square it away as quickly and efficiently as possible during the critical recording stage. Also, 32-bit floating point format isn't exactly state of the art for DAW processing these days.

It all makes me wonder how awful it might sound if a converter left off the decimation filter and accumulated all the junk bits to output a fully populated 32-bit (or higher) float?

Howard
Old 4th September 2008
  #69
Gear Addict
 

Quote:
Originally Posted by hv_ View Post
***REFER TO HIS ORIGINAL POST ABOVE***

Howard
Howard,

I do not write software for a DAW, but if I did, I would certainly try NOT to write 24-bit audio sample tracks as 48 bits or as 64 bits. The need for the many extra bits is for processing, and that does not automatically mean that each track has to be stored as huge long words. Say you want to add 16 channels of 24 bits each. You can store them as 24 bits each. Now, when you do the addition, you call one sample from each track (each one 24 bits) and add them. For that you do need more bits. But now that you are done, you may store the temporary OUTCOME on a single extra-wide track (2 tracks for stereo, 5-7 for surround), or in some cases you may even choose to reduce the outcome back to 24 bits...

Yes, my explanation was simplistic, and it only touched on the basic ideas. What do you expect from a single post? It cannot be a substitute for the operation manuals of the many DAWs on the market. But your understanding of the explanation was a bit simplistic as well. No offense intended. You need more bits for processing, but that does not mean that each track must be padded with 24 additional bits prior to the processing. The "padding" is done "on the fly", as needed and when needed.
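A sketch of that "padding on the fly", using Dan's example of summing 16 full-scale 24-bit samples. A real DAW would use a fixed 48- or 64-bit accumulator; Python's arbitrary-precision integers just make the headroom growth visible:

```python
MAX_24 = 2**23 - 1              # largest positive 24-bit two's-complement value

tracks = [MAX_24] * 16          # worst case: 16 channels, all at full scale
mix = sum(tracks)               # the accumulator must be wider than 24 bits

print(mix.bit_length())         # 27: summing 16 tracks adds log2(16) = 4 bits
print((mix >> 4) <= MAX_24)     # True: scaled back down, it fits in 24 bits again
```

Only the intermediate mix needs the extra width; the stored tracks stay 24 bits.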

You said: "It all makes me wonder how awful it might sound if a converter left off the decimation filter and accumulated all the junk bits to output a fully populated 32-bit (or higher) float?"

That concept is pretty old. DSD "parked" a single bit at very high speed, and now DXD parks even more such data, in fact a huge amount of data! There are many issues when doing so, and indeed the processing of such data requires a huge amount of processing power. In fact, much of the DSD processing was done by converting the DSD to PCM, doing EQ and so on, and then converting it back to DSD. As far as I am concerned, all the objections against PCM and all the claims for the advantages of DSD fall down when one does that. Most often, the "marketing departments" forgot to mention that such is the case. Again, doing a direct EQ on DSD (or much other processing) takes a HUGE amount of DSP power. I have yet to see any technical reason pointing to shortcomings of PCM. You can always find people that like the specific sound of one thing or another, but that is not how one should adopt a basic technical concept.

Regards
Dan Lavry
Old 4th September 2008
  #70
hv_
Gear Nut
 

Hi, Dan. I think you may have misunderstood my comment above. I was referring to my own analysis as simplistic, not yours. I really wouldn't be so bold as to characterize your analysis, most of which is way over my head, as simple. In fact, if what I said was in any way a restatement or summary of your analysis, that's a revelation to me all by itself.

I was only kidding about an ADC outputting higher res with all the accumulated oversample error bits. DXD really does that with a little EQ? Kind of opens up another related can of worms: the wisdom of tracking with one of the new DSD recorders. I gather you wouldn't consider that a good move compared to fixed-format recording with a premium 24-bit converter either.

Howard
Old 5th September 2008
  #71
Gear Addict
 

Quote:
Originally Posted by hv_ View Post
***REFER TO HIS ORIGINAL POST ABOVE***

Howard
Hello Howard.

Indeed I misunderstood your comment. Sorry about that.

I am not saying that all DSD and DXD gear converts the audio to PCM for processing and converts it back to DSD or DXD. But MUCH of that gear does just that, which would be fine if the same folks did not try to promote their format by bashing PCM while using it.

When Sony and later Philips came out with DSD, one of the huge difficulties was the processing. I saw much of the hardware (some of it at Sony, some on display at AES conventions), and a simple EQ could require very costly, large, dedicated hardware. Same for a compressor, limiter and what not. I just did not see the point in being surrounded by "refrigerator size pieces of gear" for audio processing. I do understand why the hardware is so intensive.

First, there are tons of data that need to be stored and processed. 1 bit at 5.6MHz, you say? That is around 0.7 megabytes of data per second for a single channel. Compare that to, say, 24 bits at 96KHz, which is 0.288 megabytes per second per channel. 44.1KHz at 16 bits is only 0.088 megabytes per second.
This is just the beginning. Now take DXD with 8 bits at 5.6MHz, which calls for 5.6 megabytes per second...
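Those figures follow directly from bits x sample rate; a quick check (using 1 MB = 10^6 bytes, as the post does):

```python
def mb_per_second(bits_per_sample, sample_rate_hz):
    """Raw single-channel data rate in megabytes per second (1 MB = 1e6 bytes)."""
    return bits_per_sample * sample_rate_hz / 8 / 1e6

print(mb_per_second(1, 5.6e6))     # 0.7    (1 bit at 5.6MHz)
print(mb_per_second(24, 96000))    # 0.288  (24 bits at 96KHz)
print(mb_per_second(16, 44100))    # 0.0882 (CD rate, ~0.088 as quoted)
print(mb_per_second(8, 5.6e6))     # 5.6    (8 bits at 5.6MHz, as described)
```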

So say you agree to accommodate such a space hog. Say you have a DSD (1 bit) file, thus the samples are numerous, but each sample has a value of either 0 or 1. Now you want to do the simplest of operations - attenuate by say 1dB. That means you have to multiply each sample value by 0.891, and while each zero is still a zero, each one is now 0.891251, and your data is 6 times bigger. So let's do it some justice and limit the resolution to 4 digits, thus 0.8913. You really need at least 4 digits to have 0.1dB resolution at say -20dB attenuation...

Say you want to take another track (with 0's and 1's), and you want to boost it by 1dB. The second track is still made of zeros, but each one is now substituted by 1.122.

So where is the 1 bit DSD? The whole concept was supposed to be about 1 bit, and the values 0.8913 and 1.122 are no longer one bit.

Now say you just wanted to add 2 DSD channels, no gain, and no attenuation. At each sample time you can have one of 3 possibilities:
A. Both samples are 0 thus the sum is zero. 0+0=0
B. One sample is 0 and the other is 1 thus the outcome 0+1=1
C. Both samples are 1, thus the outcome is 1+1=2

So again, we lost the DSD. Our data is no longer 0's and 1's.

The examples here are the simplest. Say you wanted to add the two tracks above; you end up with
0, 0.8913, 1.122, or 0.8913+1.122 = 2.013. This is starting to look a lot more like multibit than DSD. In other words, it is starting to look sort of like PCM, but with huge data rates.
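The point is easy to demonstrate: apply +/-1dB of gain to two 1-bit streams and mix them, and the data is no longer 0's and 1's. A sketch (the track contents are arbitrary):

```python
att = 10 ** (-1 / 20)      # -1dB gain factor, ~0.8913
boost = 10 ** (+1 / 20)    # +1dB gain factor, ~1.122

track_a = [0, 1, 1, 0, 1]  # two arbitrary 1-bit "DSD" streams
track_b = [1, 1, 0, 0, 1]

mixed = [a * att + b * boost for a, b in zip(track_a, track_b)]
print(sorted({round(v, 4) for v in mixed}))  # [0.0, 0.8913, 1.122, 2.0133]
```

One gain stage and one mix bus, and the "1-bit" signal already needs multibit words.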

And I did not even begin! Let's take the example of a low frequency EQ. It is not easy to explain to a non-mathematician, but I will try. I like the challenge of trying to make complex things more intuitive. Say you have a 100Hz sine wave. That wave changes very slowly. A whole cycle lasts 10 msec. The neighborhood of the peak of such a sine wave is almost constant (it hardly changes) during a whole 1 msec, thus there is not much information about the waveform there. If you wanted to "register" a whole cycle, you would need 10 msec of data.
On the other hand, the same 1 msec of data will register 10 complete cycles of a 10KHz tone, thus plenty of information. Note that I have not yet mentioned sample rate. So far we are talking about an analog signal and the amount of information contained in some time slice (such as 1 msec in our example).

Now let’s enter the sample rate variable into the discussion. A 1 msec time slice when sampling at say 96KHz means 96 samples. But when sampling at say 5.6MHz, the same 1 msec calls for 5600 samples! That is around 58 times more samples, and more multiply-accumulate signal processing operations for handling the same time slice.
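The sample-count arithmetic, spelled out:

```python
slice_seconds = 1e-3                  # the 1 msec time slice from the example
print(int(96e3 * slice_seconds))      # 96 samples at 96KHz
print(int(5.6e6 * slice_seconds))     # 5600 samples at 5.6MHz
print(round(5.6e6 / 96e3, 1))         # 58.3: roughly 58 times the work
```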

Now, the initial knee-jerk reaction would be to say: but the data is simpler, it is made out of 1's and 0's. Well, first, it is not! Remember the addition and the attenuation as explained above? Second, all 5600 of the filter coefficients must be very accurate - about as accurate as the 96 coefficients of the PCM filter at 96KHz.
That makes FIR filtering almost impossible, and with it goes the advantage of linear phase for audio... And you may need a compressor or a reverb, and the problem shows up again and again.

Now, what does one do with such an outcome? You want DSD? Then you need to "reformat" it to 1 bit at 64fs, or 1 bit at 128fs in the case of 5.6MHz. So you send the data through a digital noise shaper that is emulating another DSD converter. That is, of course, another huge task, and with it comes an additional reduction in dynamic range and some added distortion...

Or you can convert the DSD to PCM, use standard PCM processing, and then you still need to go through a final noise shaper (a digital emulation of a DSD AD), with some performance degradation. It is true that DXD has less degradation, but at a huge price in terms of data size and required processing.

Of course, you could get the same or better results with PCM. So where did it all come from? The idea of a single bit at high speed was driven by hardware reasons. A single comparator offers some advantages to makers of AD and DA ICs. First there was the 1 bit at 64fs converter front end, followed by an "on chip" decimator, which immediately converted the data to PCM. But then, with some new technology methods, came the multibit AD ICs with the built-in decimator (thus PCM output). It may be easier to design a converter by using a LOCAL few bits at very high speed (such as 5 bits at 64-1024fs), but the IC makers include the decimation to PCM, and the output is 24 bits at say 44.1 or 96KHz. Such a scheme certainly makes the design of the anti-aliasing filter much easier and better. A multibit AD at say 128fs (or 512fs or 1024fs) yields much better performance than a single bit, and that is exactly why the multibits took over.

But setting aside the internal workings of an AD or DA converter, which are the concern of the maker of conversion hardware, I do not see a single reason why the converted data should be anything other than PCM. PCM offers the most efficient coding scheme for digital data as we know it (a number scheme based on 2 states: 0 or 1, true or false, yes or no). Given a desired bandwidth and a desired accuracy, the PCM code is 100% efficient; all the possible states have a meaning. 16 bits offer 65,536 distinct states. 24 bits offer around 16.8 million states... DSD and DXD are a lot less efficient (as stated above).

So is there a sonic advantage? I do not see why there would be. Certainly there is no conceptual reason for it. As I stated before, you can always find some folks who would swear that something sounds better, and often other folks who would swear the opposite. I cannot and do not want to get into what sounds better. The problem comes when a listener to one particular piece of gear takes a huge leap and concludes that what they like has implications for all other cases. A poor analog front end on a DSD AD is no basis to conclude that DSD cannot sound good. A great implementation of some class AB power amplifier is no reason to make statements about class AB amplifiers in general.... Unfortunately, with some advertising money and some marketing, such practices are not too uncommon.

Do you still think that we need a whole thread about DSD and DXD? I think I covered it pretty well.

Some years ago, I was invited by Sony to show a DSD DA at their booth at AES and other shows. I spent a lot of time learning about that stuff, and I did have a pretty good working prototype. It sure took a lot of time and effort, but I decided not to show it. Why? Very simple: my parallel efforts in the "PCM department" yielded much better results. Now that DSD is no longer a format supported and promoted by Sony (and has not been for a number of years), I am glad I did not pursue it further.

Long live PCM.

Regards
Dan Lavry
Old 5th September 2008
  #72
Lives for gear
 

Quote:
Originally Posted by jmarkham View Post
Wouldn't the extra 8 bits give you not only increased dynamic range but also finer gradation of amplitude as well? I don't know much material that's gonna swing 200dB ;-) .. but maybe the finer gradation of the dynamic range would be higher fidelity?

That assumes you're going into a 32 or 64 pipeline as well.

jeff
No. No finer gradation than 24 bits is possible because of thermal noise in the converter. The last 2 bits will always be garbage on any 24-bit sample, so if you add 8 more bits, that makes 10 bits of garbage noise in each sample. It would be a finer gradation of random noise, but not any significant signal information. That wouldn't be higher fidelity to the original signal.
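The point about analog noise swamping the extra bits can be illustrated with a toy simulation (plain Python; all the numbers are assumptions for illustration: a full-scale 440 Hz sine, an ideal quantizer, and a -120 dBFS analog noise floor):

```python
import math
import random

random.seed(0)
N = 20000
NOISE_RMS = 10 ** (-120 / 20)   # assumed -120 dBFS analog noise floor

def record(bits):
    # Simulate capturing a sine through a noisy analog front end
    # with an ideal `bits`-deep quantizer; return the total error
    # (noise + quantization) relative to the clean signal, in dBFS.
    q = 2 ** (bits - 1)
    err_sq = 0.0
    for n in range(N):
        clean = math.sin(2 * math.pi * 440 * n / 48000)
        noisy = clean + random.gauss(0, NOISE_RMS)
        captured = round(noisy * q) / q
        err_sq += (captured - clean) ** 2
    return 20 * math.log10(math.sqrt(err_sq / N))

print(record(24))   # ~ -120 dBFS
print(record(32))   # also ~ -120 dBFS: the extra 8 bits buy nothing
```

Both runs land at the analog noise floor; once the quantization error is far below that floor, adding bits just digitizes the noise more finely.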
Old 5th September 2008
  #73
Lives for gear
 
haryy's Avatar
I agree with jmarkham
If the sound occupies the "higher" bits, won't it have more accuracy to better describe the minute volume changes that happen all the time? Isn't that the case?

PS. It's more of a question rather than a solid opinion of mine.
Old 5th September 2008
  #74
Lives for gear
 

Quote:
True and real 32 bit (or even higher) Converters, DAW's and Plug Ins, therefore recordings SHOULD come out eventually.. and they better for that matter or I will be let down by this crumbling world of crap, and "its good enough" theories.
I wouldn't be at all surprised if 32-bit converters do come out (DAWs and plugins already operate at 32-bit precision or more, but processing is different than capturing), but the "true and real" part...those extra bits will just be what people around here often refer to as "marketing bits". People will buy them because 32 is more than 24, so it must be better, right? Well, no...a given 32-bit converter may be better than another 24-bit converter, but it won't be because of the extra bits.

Just because there's no need for 32-bit converters doesn't mean that today's converters are "good enough"...they can still get better. It just doesn't have to happen by adding more bits. Even today's converters running at 16/44.1 sound much better than they did ten or fifteen years ago, don't they?

Quote:
People (yes even the great white paper written on this subject) can sit around and preach to me until their blue in the face about how we can't audibly hear past 96k.....and a dynamic range of 144db will never be heard or utilized, etc...and so it all is unnecessary to move forward in technology this way in the industry.
Some things are just not open to debate. It is a FACT that nobody can hear 144 dB of dynamic range. Even if you can withstand higher SPLs than your neighbor, you won't be able to hear quieter signals immediately afterwards. Your ears won't let you. And as has already been pointed out, the thermal noise of analog electronics won't allow more than about 120 dB of dynamic range.

As for hearing past 96 kHz, nobody can (and nobody can hear past 48 kHz either). If a 96 kHz converter sounds better than a 48 kHz converter, it's not because you're "sensing" things you can't hear. It's because something about the converters...probably their filters...affects the frequencies you can hear differently. It's not hard to find converters that sound better at 48 kHz than at 96 kHz...it's all about the implementation.

Again, I don't think that anyone is saying that we wouldn't move forward...just that moving forward doesn't necessarily equate to adding more bits and raising sampling rates.

Quote:
BUT PLEASE everyone, think about this so the world can change FORWARD for the better: It's what you CAN'T hear in music/audio recordings that makes that music/audio so great and x better than y, etc.... Proof of point - I say it like this, neighboring frequencies effecting other frequencies. Inaudible frequencies can play a big part of how a audible frequency is heard and interoperated by the listener. (of course not to the limited minded person though, and they will deny this until their grave).
What does "interoperated" mean?

Inaudible neighboring frequencies do not affect what we can hear. However, electronically they may...which actually could be used as an argument against higher sampling rates.

Quote:
Problem is, is if none of us believe in good reasons to move forward in this industry with higher bit depth, sample rates, and quality in general, then the companies that are in charge will not even TRY to invent ways to make it possible, then affordable and possible, then affordable possible AND efficient.
I think everyone wants to move forward with higher quality in general. I think that the truly forward-thinking companies will be the ones who do it in another way than raising bit depth and sampling rates. That's actually hard to do...again, it's easy to convince people that more always equals better. And that's not necessarily true.

Quote:
Wouldn't the extra 8 bits give you not only increased dynamic range but also finer gradation of amplitude as well? I don't know much material that gonna swing 200db ;-) .. but maybe the finer gradation of the dynamic range would be higher fidelity?
No...adding bits increases dynamic range by lowering distortion (which is how that extra "gradation" you mention manifests itself) but once you get down to the point where it's below the noise floor of the system it's not getting any better.

Quote:
Same goes with 48k... Everyone knowledgeable in this field says 48k was/is beyond our audible detection, so 96k is just ridiculous overkill... well..... When I run my same tests I can hear a difference in my 96k recordings vs my 48k recordings.(and the difference is 96k sounds better). So WHY then if supposedly we can't detect past 16 bit 48k can I and others hear a difference when the ranges are upped past supposed "overkill" ?
First off, I don't think anyone said we can't "detect" past 16 bits. We can. 16 bits is more than sufficient as a delivery format for most music, but it's not "perfect" (even theoretically). It is true that we can't hear frequencies higher than those a 48 kHz sampling rate can capture and reproduce, but that doesn't mean that, depending on the design of a converter, a 96 kHz converter can't sound better than a 48 kHz converter. If it does, though, it's because of how it affects the frequencies you can hear...not because of the presence of higher frequencies that you're somehow "sensing". There have been plenty of tests done to try to prove that higher-frequency content makes things sound "better" to us, and they've all failed.

If I hear a 48 kHz recording that sounds better than a 96 kHz recording of that same performance, what should I conclude from that?

You can certainly say that a 96 kHz recording can capture more frequencies than a 48 kHz one, and you can certainly say that one specific converter running at 96 kHz sounds better (subjectively, at least) than a specific converter running at 48 kHz, but you can't say that 96 kHz in and of itself sounds better than 48 kHz, because there are too many variables involved.

Quote:
Well did you ever notice how that boost or cut you did at 9k somehow made a audio illusion (or maybe not a illusion) that made other frequencies FAR away sound different now ? That is called "neighboring frequencies effecting other frequencies". And whether it is a audio illusion or not doesn't matter.
Yes, but boosting or cutting at one frequency and having it affect other frequencies is not the same thing as frequencies "in the air" that you can't hear affecting frequencies that you can. If you took an EQ and boosted 27 kHz it would certainly affect what you could hear...and if you recorded that signal at 44.1 kHz you'd still be able to hear the difference. If the frequency was "in the air" then at 44.1 kHz it would be filtered out, but guess what? Your ear would filter it out as well. Again, all filters are not created equal, so it may sound different with a 96 kHz converter if it had better filters...but my guess would be that you'd find things to sound better with a Lavry/Apogee/Mytek/insert-your-favorite-converter-here at 44.1 kHz than an M-Audio/MOTU/Behringer/etc. at 96 kHz in most cases.

Quote:
Don't you think as time goes on, the same would happen from supposed over kill 24bit recordings to 32 (or higher) bit recordings ?
Nope. For capture, 24 bits is more than enough, but not overkill...32 is. For processing, 32 bits isn't overkill, but that's not what we're talking about here.

Quote:
That's one of the knocks on digital is that it lacks the continuous response of analog .. perhaps finer quantization would bring digital a bit closer.

Perhaps the analogy of visual response vs. audio response is a
non-sequitur.. but I'd sure like to hear it ;-)
There's a huge difference between video and audio, and a good reason why that analogy doesn't work. With video, we're looking at a certain number of frames per second, and each of those frames has thousands or millions of discrete pixels; our eyes and brains see those things and put them together. With audio, even though discrete steps are being captured, that's not what we're hearing. We are hearing a continuous analog signal. The quantization of the signal in the time domain determines the frequency response that can be captured, but what we're hearing isn't that quantized signal. Finer quantization (i.e. increasing the sampling rate) manifests itself purely as a higher frequency response; it doesn't make the lower frequencies any more accurate. A 440 Hz sine wave captured at 10 kHz would be no more or less accurate than one captured at 20, 44.1, 96, or 192 kHz.

As far as quantization in the amplitude domain is concerned, that's determined by the number of bits captured. Finer quantization (i.e. increasing the bit depth) manifests itself purely as increased dynamic range (as has been mentioned, it lowers distortion as well, but it's not really "as well"; it's the same thing...as the distortion decreases, dynamic range increases). If you capture that 440 Hz sine wave we mentioned earlier at, say, -6 dBFS at 8-bit resolution, the sine wave itself would be no more or less accurate than it would be at 12, 16, 20, 24, or 32 bits. The noise would get lower and lower (and, again, once you got to about 20 bits it would effectively not change at all because of the noise in the analog circuitry) but the sine wave itself wouldn't change.
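That claim — finer amplitude gradation shows up only as a lower error floor, not as a "better" sine wave — can be checked numerically (an illustrative plain-Python sketch, ideal quantizer assumed):

```python
import math

def quant_error_db(bits, amplitude=0.5):
    # RMS error, in dBFS, of an ideal `bits`-deep quantizer
    # capturing a -6 dBFS 440 Hz sine sampled at 48 kHz.
    q = 2 ** (bits - 1)
    err_sq = 0.0
    samples = 48000
    for n in range(samples):
        clean = amplitude * math.sin(2 * math.pi * 440 * n / 48000)
        err_sq += (round(clean * q) / q - clean) ** 2
    return 20 * math.log10(math.sqrt(err_sq / samples))

# The error floor drops roughly 6 dB per extra bit; the sine itself
# (the part that matches the clean signal) is unchanged throughout.
for b in (8, 12, 16, 24):
    print(b, round(quant_error_db(b), 1))
```

Each run captures the same sine wave; only the residual error shrinks as bits are added.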

Quote:
One thing I can say for sure from learning from history is, things NEVER really reach their plateau.
No disagreement there.

Quote:
Maybe the bits wont be for dynamic range because we don't need that as you say, but maybe it will be for some other audio quality. (if dynamic range lets say is up and down, how about in and out [3D] range and side to side range and the whole cubical circle of range ?) -- Man we never know until it happens, and thats my point.
Dynamic range is up and down, frequency response is side to side...in and out is a stereo phenomenon...this is all stuff that's a given.

Quote:
My question then is this... why can't they apply bits to other things in the audio ? (like I suggested in my above post). Why is it subject to only dynamic range ? Is this where our "mind block" is ? Maybe they should figure out how to apply bits better.
If you understood what bits are, you'd understand why what you're suggesting doesn't really make sense. If you want to make things better, you have to look at things other than bits.

Quote:
The performance is THD+N of -103dB and S/N of 120dB.
That alone shows that you're getting, at best, 24 bits' worth of usable conversion...something plenty of 24-bit converters are already capable of. So those extra eight bits are just marketing bits.
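For what it's worth, you can turn a measured S/N figure into an effective number of bits using the standard ideal-quantizer relation (SNR = 6.02·N + 1.76 dB); by that yardstick the quoted 120 dB spec delivers even fewer than 24 usable bits:

```python
def effective_bits(snr_db):
    # Invert SNR = 6.02*N + 1.76 to get the effective number of bits (ENOB).
    return (snr_db - 1.76) / 6.02

print(round(effective_bits(120), 1))     # 19.6 -- the quoted 120 dB S/N
print(round(effective_bits(146.24), 1))  # 24.0 -- what an ideal 24-bit converter would need
```

So by this measure, a "32-bit" converter with 120 dB S/N resolves roughly 20 bits; everything past that is noise.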

Quote:
Whether 32 bit float is better than 48 bit fixed is open to debate...my knowledge bottoms out here I'm afraid, although I've heard arguments for and against both.
It's all in the implementation, just like the sampling rate thing. It's almost like saying that cars are faster than motorcycles, or vice versa...sure, maybe the fastest car you've driven is faster than the fastest motorcycle you've driven, but the opposite may be true for me. Even if the fastest car on the market right now is faster than the fastest motorcycle, that all may change tomorrow...personally, I will be happy if 24-bit 44.1-kHz converters keep on getting better and better. Why would anyone actually want to have to double up their storage space and halve their bandwidth, even if it keeps getting cheaper and cheaper?

Having said that, if I'm in the market for a converter and the best one I can find at the time sounds better at 96 kHz then that's the rate I'll record at.
Old 5th September 2008
  #75
Gear Addict
 

Quote:
Originally Posted by Duardo View Post
***REFER TO HIS ORIGINAL POST ABOVE***
Duardo, Man, them are some GREAT replies you gave.

Thanks for taking the time to throw your input in here, and mainly focusing on some of my questions I had in here that seemed to be getting ignored.

I can see a lot of what you are saying... just one thing: the neighboring frequency thing. As we seem to agree on part of it in the audible realm (which to me is scientific proof of the other possibility), I'm still sticking to the possibility that a frequency near the very edge of the audible range can be affected by a very close neighboring frequency that is just outside of the audible realm.

BUT, that whole subject is technically askew of this main topic, because it relates to SR more than Bits...

Quote:
Originally Posted by Duardo View Post
What does "interoperated" mean?
LOL - I don't know if I am spelling it right, and the spell checker just kept fixing it to that.... I am trying to use the word "interpret" and "interpreted" -- (I just finally looked it up - LOL) --

I was meaning how technically the entire digital realm is the converters "interpretation" of audio. So how do we make it "interpret" audio BETTER ?

So yea, you can now go back up to my post(s) and know that every time you see that funny looking word in there, you can exchange it for "interpret", "interpreted", "interpretation", etc..... you get the idea...
Old 5th September 2008
  #76
hv_
Gear Nut
 

Quote:
I wouldn't be at all surprised if 32-bit converters do come out (DAWs and plugins already operate at 32-bit precision or more, but processing is different than capturing), but the "true and real" part...those extra bits will just be what people around here often refer to as "marketing bits".
Not only are capturing and processing different, so is the storage format. Capturing uses fixed point and processing uses floating point. The 32-bit floating point used by DAWs has 1 sign bit, 8 exponent bits, and 23 stored mantissa bits (24 bits of precision, counting the implied leading bit). So when moving a 0 dBFS sample from a 24-bit converter to a 32-bit DAW or plugin, there's no spare storage space for any extra bits; in this case, the 8-bit exponent is essentially wasted space. And once you start using the exponent range, you get into dynamic range that 24-bit converters can't resolve. So what if converters could actually increase their resolution beyond 24 bits? Then 32-bit DAWs would lose resolution on a 25-bit converter's 0 dBFS output unless they tossed the 25th bit. 32-bit floating point is effectively the minimum precision needed to take in 24-bit audio without throwing anything away. Once you start manipulating sound, however, using even more precision sounds better.
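Howard's breakdown of the 32-bit float format is easy to verify; here's a small Python sketch using the standard `struct` module to pull the fields apart (the field names are mine):

```python
import struct

def float32_fields(x):
    # IEEE-754 single precision: 1 sign bit, 8 exponent bits (bias 127),
    # 23 stored mantissa bits (plus one implied leading bit = 24 bits of precision).
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF
    mantissa = bits & 0x7FFFFF
    return sign, exponent, mantissa

print(float32_fields(1.0))    # (0, 127, 0)
print(float32_fields(-0.5))   # (1, 126, 0)

# A full-scale 24-bit integer sample survives the float32 round trip exactly,
# which is why 32-bit float is the minimum that loses nothing from a 24-bit source.
peak = 2 ** 23 - 1  # 8388607, the largest positive 24-bit sample value
assert struct.unpack(">f", struct.pack(">f", float(peak)))[0] == peak
```

A 25-bit full-scale value (2^24 + 1 and up) would no longer round-trip exactly, matching the point above.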

My suggestion... get the best 24-bit converter you can afford and increase your DAW processing resolution. I use a 64-bit DAW and track at 88.2k/24 for 44.1k projects.

Howard