2buss comparison: Fatso vs. Oxford Dynamics Plugins
Old 9th June 2006
  #31
The Distressor's "daddy"
 
Dave Derr's Avatar
 

FULL SCALE ADC SIGNALS

If you don't go right up to Full Scale on the ADC, YES, you are losing bits and resolution. This is by definition. If the ADC isn't working right, that's another story.

It's quite simple in some ways. Let's say the ADC measures about 65,000 steps (a 16 bit converter). If you don't use a full scale signal (let's say the peak is 6dB below clipping), you are only using about 32,000 steps (being 6dB below full scale, this is one bit). That ADC didn't use the maximum number of steps it can measure, so in effect, it was only measuring 32,000 and lost 32,000 steps of resolution. You can NEVER GET THIS BACK.
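A minimal sketch of that step-counting arithmetic, assuming an ideal N-bit converter (the 65,000/32,000 figures above are the rounded forms of 65,536 and 32,768):

```python
# Every ~6.02 dB that the peak sits below full scale roughly halves the
# number of codes the waveform can actually reach on an ideal ADC.

def steps_used(bits: int, peak_dbfs: float) -> int:
    """Approximate number of quantization codes exercised by a signal
    peaking at `peak_dbfs` on an ideal `bits`-bit converter."""
    total_steps = 2 ** bits                      # 65,536 for 16 bits
    return int(total_steps * 10 ** (peak_dbfs / 20.0))

for peak in (0, -6, -12, -30):
    print(f"16-bit ADC, peak {peak:>4} dBFS -> ~{steps_used(16, peak):,} codes")
# 0 dBFS uses all ~65,536 codes; -6 dBFS uses ~32,800 (about half, one bit's worth unused)
```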

It's the same as if you take a digital picture and then decide later you wish to zoom up on something that you should have zoomed up on while taking the picture. You are throwing away much of the picture, and much of the resolution of the camera. The result, as most of you know, is a grainy picture that will never be as sharp and clear as if you had zoomed up on the item of interest to begin with when taking the pic, and used all the pixels on the thing you wanted to really see. You used fewer pixels on the important part you wanted to capture.

If you want to use the full resolution of the converter, YOU MUST GO AS CLOSE TO FULL SCALE AS YOU CAN. Otherwise, when you increase the signal later in the digital domain, it's like zooming up on a digital picture where you have thrown away many of the pixels of resolution.
Old 9th June 2006
  #32
Lives for gear
 
norman_nomad's Avatar
Quote:
Originally Posted by Dave Derr
If you don't go right up to Full Scale on the ADC, YES, you are losing bits and resolution. This is by definition. If the ADC isn't working right, that's another story.

It's quite simple in some ways. Let's say the ADC measures about 65,000 steps (a 16 bit converter). If you don't use a full scale signal (or have a peak just below clipping), maybe you are only using 60,000 steps (just an arbitrary number for illustration's sake). That ADC didn't use the maximum number of steps it can measure, so in effect, it was only measuring 60,000 and lost 5,000 steps of resolution. You can NEVER GET THIS BACK.

It's the same as if you take a digital picture and then decide later you wish to zoom up on something that you should have zoomed up on while taking the picture. You are throwing away much of the picture, and much of the resolution of the camera. The result, as most of you know, is a grainy picture that will never be as sharp and clear as if you had zoomed up on the item of interest to begin with when taking the pic, and used all the pixels on the thing you wanted to really see. You used fewer pixels on the important part you wanted to capture.

If you want to use the full resolution of the converter, YOU MUST GO AS CLOSE TO FULL SCALE AS YOU CAN. Otherwise, when you increase the signal later in the digital domain, it's like zooming up on a digital picture where you have thrown away many of the pixels of resolution.
This isn't how I understand it, if by "resolution" you mean "quality" or "fidelity"... I thought bit depth was only an indicator of a potential to resolve dynamic range, and as long as your lowest signal lies sufficiently above the noise floor of the converters themselves, the quality won't suffer whether you record your peaks at -30dbfs or -3dbfs...

Old 9th June 2006
  #33
(Without reading anything but the first post), I think "B" is clearer and would probably pick that one. "A" is okay too, though, and I'd guess that one's the Fatso.
Old 9th June 2006
  #34
(Having now read the other posts) maybe I should get me a Fatso.
Old 10th June 2006
  #35
The Distressor's "daddy"
 
Dave Derr's Avatar
 

RECORD FULL SCALE

Quote:
Originally Posted by norman_nomad
This isn't how I understand it, if by "resolution" you mean "quality" or "fidelity"... I thought bit depth was only an indicator of a potential to resolve dynamic range, and as long as your lowest signal lies sufficiently above the noise floor of the converters themselves, the quality won't suffer whether you record your peaks at -30dbfs or -3dbfs...
Eeek. Never record where your peaks are at -30dBfs unless you are going for a noisier, lower quality sound. Remember, the first bits are your most accurate and important. Each bit gets less accurate. Once you reach the 19th bit on a current 24 bit converter, you are starting to get into just noise and distortion. The last 5 bits are close to useless for anything except dither. So if you throw away 30dB of the most important bits, you end up with a "theoretical" 19 bit converter. BUT, since the last 5 bits are questionable like I said, NOW YOU HAVE TURNED YOUR 24 BIT CONVERTER INTO A GOOD 14 BIT CONVERTER. What a waste!
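The arithmetic behind the "24 bit becomes 14 bit" figure, sketched under the assumptions stated above (roughly 6 dB per bit, plus the rule of thumb that the bottom ~5 bits of a current 24 bit converter are mostly noise):

```python
# Back-of-envelope version of the claim above. The "5 noisy bottom bits"
# figure is the poster's rule of thumb for current converters, not a spec.

DB_PER_BIT = 6.02          # each bit of an ideal converter is worth ~6.02 dB

def effective_bits(nominal_bits: int, peak_dbfs: float, noisy_lsbs: int = 5) -> float:
    """Rough 'usable bits' left after recording with peaks at `peak_dbfs`
    and discounting the bottom bits treated as converter noise."""
    lost_to_level = abs(peak_dbfs) / DB_PER_BIT
    return nominal_bits - lost_to_level - noisy_lsbs

print(round(effective_bits(24, -30.0), 1))   # ~14.0: 24 - 5 (level) - 5 (noise)
```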

Record right up to Full Scale if you can. The digital converter is best there. If you hear distortion or some degradation there, you need to troubleshoot. Poorly designed software in DAWs does sometimes clip, especially in plug-ins. Often in Pro Tools you should watch or listen for clipping and attenuate the input to a plug-in.
Old 10th June 2006
  #36
Lives for gear
 
scott petito's Avatar
 

Dave is spot on concerning resolution. Incidentally, you also lose resolution when using a plugin compressor... as you decrease the gain via any plug-in, you decrease bit resolution as well... George Massenburg has mentioned this many times; I believe a 6 dB decrease in gain amounts to a one bit loss... this is one reason why I use analog compression on the output of my 2buss. In fact, the Fatso has not left my 2buss in 3 years...
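A hedged illustration of the "6 dB of gain reduction = one bit" idea. It applies most directly to fixed-point (integer) processing; DAWs that mix in 32-bit floating point keep constant relative precision in the mantissa, so the bookkeeping there is different:

```python
# In fixed-point math, a 6.02 dB gain cut is a divide-by-two, which simply
# shifts the sample word right and discards its bottom bit.

sample = 0b0111_1111_1111_1111       # a 16-bit sample near full scale (32767)
half = sample >> 1                   # -6.02 dB in integer math
print(f"{sample:016b}  ->  {half:016b}")
# 0111111111111111  ->  0011111111111111  (the original LSB is gone)
```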


cheers
SP
Old 10th June 2006
  #37
Lives for gear
 
Jamzone's Avatar
 

"losing a bit here or there or using an old 16 bit converter at 44.1KHz will hardly mean a thing when its over."

Right on, Dave!! The general sound you've created is the important point here; you can't fix anything by processing it in a specific way after the mix is done. If it's **** it will stay ****, just in a different colour.

I can relate to the fact that converters may sound more open at lower signal levels. The "bits" you may lose are irrelevant and do not affect the signal nearly as much. Everyone uses 24-bit converters nowadays. 22 bits are quite OK in my opinion.

//Jamzone
Old 16th June 2006
  #38
Lives for gear
 
norman_nomad's Avatar
Quote:
Originally Posted by Dave Derr
Eeek. Never record where your peaks are at -30dBfs unless you are going for a noisier, lower quality sound. Remember, the first bits are your most accurate and important. Each bit gets less accurate. Once you reach the 19th bit on a current 24 bit converter, you are starting to get into just noise and distortion. The last 5 bits are close to useless for anything except dither. So if you throw away 30dB of the most important bits, you end up with a "theoretical" 19 bit converter. BUT, since the last 5 bits are questionable like I said, NOW YOU HAVE TURNED YOUR 24 BIT CONVERTER INTO A GOOD 14 BIT CONVERTER. What a waste!

Record right up to Full Scale if you can. The digital converter is best there. If you hear distortion or some degradation there, you need to troubleshoot.
Yeah... but what's wrong with 14 bits if you're recording something with a dynamic range of 60db? Bits by definition are just a measurement of a potential to capture differences in volume. Right? I haven't encountered the assertion that the lower bits are "less accurate"....

There's of course the whole other gear interfacing issue and gain staging argument - when most converters are calibrated so that 0vu = -18dbfs, you're going to have to push your preamps pretty hard to get close to 0dbfs... sometimes this can sound good, sometimes this can sound choked and distorted.

Quote:
Originally Posted by Dave Derr
It's quite simple in some ways. Let's say the ADC measures about 65,000 steps (a 16 bit converter). If you don't use a full scale signal (or have a peak just below clipping), maybe you are only using 60,000 steps (just an arbitrary number for illustration's sake). That ADC didn't use the maximum number of steps it can measure, so in effect, it was only measuring 60,000 and lost 5,000 steps of resolution. You can NEVER GET THIS BACK.
This isn't my understanding. You're equating bit depth with resolution (or quality)... it's not the same as digital photos...

Losing your 5,000 steps of resolution only matters if what you're recording requires the FULL DYNAMIC RANGE of your 16 bit converter... if you're recording distorted guitar with a dynamic range of 6dB, it shouldn't matter if you record at -1dbfs or -24dbfs...

If your recording levels are so low that the noise floor of the converters is creeping into the recording, then I can understand the argument to push the gain a little bit more... but in theory and practice I think it's poor advice to suggest that people push their levels close to 0dbfs.

Of course, I know you're the guru of gear, so if I'm off base let me know... as much as these subjects are discussed 'round the net, there still seems to be a lot of confusion. Myself included!

Old 21st June 2006
  #39
The Distressor's "daddy"
 
Dave Derr's Avatar
 

Once Again...

norman_nomad: "...but in theory and practice I think it's poor advice to suggest that people push their levels close to 0dbfs."
Actually norman, it should be just the opposite, that is, it is poor advice to NOT push levels close to 0dbfs when you can, without going "over". Your ADC manual should quite clearly state this.

For every 6dB below 0dBFS (that is, below full scale 0dB readings on your ADC), you are losing 1 bit of the full potential of your ADC. Right off the bat, you are turning a higher quality ADC into a lower quality ADC. If you are recording your peak levels 6 dB below clipping on a 20bit converter, the very best your converter can be is 19bits. Just how many bits are you willing to lose? Why would you pay for a higher resolution converter just to purposely turn it into a lower resolution one?

I have posted several other times stating the same thing in slightly different ways. The digital picture analogy was just that... an analogy, but the concept of loss of resolution due to tossing away much of the captured digital information is quite relevant.

Unless you have other problems, any decent converter will give you the best performance and resolution when ALL the bits are used, and your peak signals go very close to full scale (0dBFS). There is no reason to "purposely" leave unused headroom on converters, other than not having the time to perfectly adjust the input levels right up to just below clipping, OR... being afraid your DAW will not handle full scale signals gracefully. All ADC converter manuals should say the same thing.
Old 22nd June 2006
  #40
Lives for gear
 
norman_nomad's Avatar
Quote:
Originally Posted by Dave Derr
norman_nomad: "...but in theory and practice I think it's poor advice to suggest that people push their levels close to 0dbfs."
Actually norman, it should be just the opposite, that is, it is poor advice to NOT push levels close to 0dbfs when you can, without going "over". Your ADC manual should quite clearly state this.

For every 6dB below 0dBFS (that is, below full scale 0dB readings on your ADC), you are losing 1 bit of the full potential of your ADC. Right off the bat, you are turning a higher quality ADC into a lower quality ADC. If you are recording your peak levels 6 dB below clipping on a 20bit converter, the very best your converter can be is 19bits. Just how many bits are you willing to lose? Why would you pay for a higher resolution converter just to purposely turn it into a lower resolution one?

I have posted several other times stating the same thing in slightly different ways. The digital picture analogy was just that... an analogy, but the concept of loss of resolution due to tossing away much of the captured digital information is quite relevant.

Unless you have other problems, any decent converter will give you the best performance and resolution when ALL the bits are used, and your peak signals go very close to full scale (0dBFS). There is no reason to "purposely" leave unused headroom on converters, other than not having the time to perfectly adjust the input levels right up to just below clipping. All ADC converter manuals should say the same thing.
If what you're trying to say is that an ADC will perform better near 0dbfs because it specs better (distortion, s/n, frequency response, etc.), then I'm willing to agree with the assertion. If what you're trying to say is that an ADC will sound better near 0dbfs simply because you're utilizing more bits, then I don't agree.

Bits measure dynamic change, not fidelity, and only so many bits are actually necessary to completely and accurately measure dynamic change. Fewer bits doesn't always = bad sound.

For reference, here’s where I gathered some of my current understandings:

And please, if I’m failing to grasp a major concept here or something please let me know. I like to feel educated on these kinds of things.



(Borrowed from this thread on the Digi forum. Link found here: http://duc.digidesign.com/showflat.p...=&fpart=4&vc=1 I hope Nika doesn’t mind me reposting this.)

First, as illogical as it seems, bits do NOT equal better resolution of anything but the noise in its signal path. More bits ONLY equals dynamic range. I have no idea what the understanding is of the various people that are following this thread, so I don't know in what amount of detail to give my answer. I'm fairly new to this particular forum, though I frequent a few others. If I offend anyone, please forgive.

First we have to talk about the difference between "quantization steps" and "bits". The number of "quantization steps" from top to bottom is defined by how many bits we have. A 1 bit signal has two quantization steps (0 and 1), a 2 bit signal has four, a sixteen bit converter has 65,536, and a 24 bit converter has ~16,000,000. Remember that half of these steps are below the zero axis and the other half are above, so to look at a 24 bit signal, the audio goes from -8,000,000 to +8,000,000. Cool so far?

Let's also define "noise" really quickly. Noise=white noise. Any other form of noise is considered noise WITH SIGNAL, and you'll get different results if you treat the "apparent" noise floor as the "actual" noise floor. The actual noise floor is where the signal actually drops off into white noise only. If your signal drops off into pink noise, or other filtered noise then you have not actually hit the noise floor yet.

"Signal to noise" ratio deals specifically with white, pure, natural noise. When discussions over signal to noise ratios are brought up it is implied that the noise being spoken of is the fundamental level of white noise.

Let's talk about a sine wave with a signal to noise ratio of 42db. This will take 7 bits, or 128 quantization points, to accurately capture and reproduce this sine wave, accepting that each bit gives us 6dB of dynamic range capabilities (a whole other lecture, but a commonly accepted point. Run with me...). This means that the signal will take up all of -64 to +64 quantization steps. Now, to hopefully answer your next question, the signal, when turned up to maximum in an 8 bit converter, will indeed register from -128 to +127, thus implying more "resolution". When put into a 16 bit converter, it will indeed register from ~-32,000 to ~+32,000. This is an example where more "quantization points" are used to capture this audio. Unfortunately, though, it is specifically NOT more resolution. Let me try to explain why:

Even though we have essentially 65,000 points now to describe that sinewave, it is really divided up into 128 chunks of about 512 quantization steps. This is because we know that it is a 42dB signal, and a 42 dB signal can only be divided into 128 quantization steps by definition. It'd be like trying to draw a line on graph paper with a can of spray paint. The width of the spray is only so resolved. Attempting to resolve it further has you defining more than the spray actually yielded. Making more refined graph paper isn't going to help describe that artwork you did any better. The width of the spray here represents noise, and the resolution of the graph paper represents the number of quantization steps. So back to our situation, a 42dB signal can ONLY be divided into 128 steps, period. Trying to do more than that has you defining more accuracy than the signal has.

The signal is going to fall within that 512 quantization step window, but exactly where within that window is not important because the resolution within that window of 512 quantization steps only helps to describe the white noise. But since this is white noise, it is unnecessary to describe it with precision, as any random area within that window is really noise. What this means is that the behaviour of the signal within that window of 512 quantization steps is totally random. *

So the signal will pass through all of these 128 groups of 512 quantization steps, but the fact that it does just that, and at what times it passes through these ranges is all we need to know. Exactly where within that window it passes is totally irrelevant and does not give us any better resolution of the actual signal itself.

Now if we turn the signal down 6db so that we are only using 15 bits of our 16 bit system, then even though the system is CAPABLE of 64,000 increments from top to bottom, we're now only using 32,000 of them to describe this sine wave. We still only have 128 quantization steps for the signal itself, each of which is now divided into 256 quantization steps for the noise.

Now you can say, "what if the signal passes through that window of 512 or 256 quantization points in a very organized fashion?" Well then that would not be noise! That would be some sort of filtered noise, or not noise at all, and it changes the situation entirely. We are no longer dealing with 42dB SNR. We're now dealing with some other signal to noise ratio. In other words, if the signal passes through that window of 512 quantization points dead on center then you have changed the situation. This is no longer 42dB SNR. If it really passes through dead on center then you actually have a sine wave that has more SNR than this sampling paradigm provides for (16 bits is what we're talking about), so this sine wave has a signal to noise ratio of greater than 96db, and we need to increase our bit depth until there IS a determinable resolution.

But that 42db signal is only ITSELF ever quantized into 128 discrete increments, and thus its resolution does not change no matter how many quantization points we add. As I said before, all that does is give you better resolution of a totally random noise signal.

So again, we need to be clear about using the word "resolution". The audio signal itself never has any better resolution than the minimum amount necessary to describe it, which can be ascertained by its dynamic range (or signal to noise ratio). Increasing the bit depth does, in no way, benefit the "resolution" of the signal, and thus we try to avoid using that word to describe the effect of adding bit depth. All that adding bit depth does is allow us to accurately record material with wider dynamic range.

Again, I don't care HOW we use the word "resolution". The SIGNAL itself does not have any additional "resolution", "quantization steps", "discrete measurement increments", or any other term to describe units of measurement when we increase the bit depth beyond what is necessary to accurately describe the signal.

This all explains that bits only tell you how much total dynamic range the system is capable of. Any use of the higher number of quantization points to try to increase the audio's "resolution" is futile, as it only hopes to describe with accuracy the system's random noise. This all feeds back to my point that you only need to record as hot as the dynamic range of your music allows. Any more than that is unnecessary. Thus, as I often say, "turn it down, it'll be fine! ...

Nika"
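A small numerical sketch of Nika's argument, under the usual idealized assumptions (an ideal quantizer, TPDF dither, numpy available). The same sine is "recorded" at two levels, normalized back, and compared to the original; the quieter take has a higher relative noise floor but the same waveform shape:

```python
import numpy as np

rng = np.random.default_rng(0)
n, bits = 48000, 16
lsb = 2.0 / (2 ** bits)                        # step size for a +/-1.0 full-scale range
t = np.arange(n) / 48000.0
sine = np.sin(2 * np.pi * 1000.0 * t)          # full-scale 1 kHz sine

def record(signal, level_db):
    """Scale to `level_db` dBFS and pass through an ideal dithered 16-bit quantizer."""
    x = signal * 10 ** (level_db / 20.0)
    dither = (rng.random(n) - rng.random(n)) * lsb     # TPDF dither, +/- 1 LSB
    return np.round((x + dither) / lsb) * lsb

hot = record(sine, -0.1) / 10 ** (-0.1 / 20.0)     # normalize both takes back up
low = record(sine, -48.0) / 10 ** (-48.0 / 20.0)
for name, take in (("hot take", hot), ("low take", low)):
    residual = take - sine
    print(name, "residual vs. original:", round(20 * np.log10(np.std(residual)), 1), "dBFS")
# Both residuals are featureless noise; the low take's floor is ~48 dB higher,
# but the waveform shape (the "resolution" of the signal) is the same.
```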
Old 23rd June 2006
  #41
The Distressor's "daddy"
 
Dave Derr's Avatar
 

Quote:
Originally Posted by norman_nomad
Again, I don't care HOW we use the word "resolution". The SIGNAL itself does not have any additional "resolution", "quantization steps", "discrete measurement increments", or any other term to describe units of measurement when we increase the bit depth beyond what is necessary to accurately describe the signal.
Actually this seems very misleading. Most music has a limited dynamic range that we actually use and hear. But this has nothing to do with accuracy or linearity as described above.

Even if a signal only has 30dB of useful dynamic range that we need to use, the details will be lost much more with an 8 bit converter than with a 16 bit converter. An 8 bit converter has a 48dB dynamic range but is only capable of .39% THD, if my memory is correct. Compare that to a 16 bit converter, which has a theoretical linearity of .0015% THD. The 16 bit converter is 256 times more accurate in describing the signal. Distortion has many implications, but to me, it's how accurately the output matches the input in terms of a pure sine wave. To say you only need a converter with the dynamic range of the signal is really, really misleading and confusing. You want to describe an input signal as accurately as you can. To say you only need enough bits to describe the useful dynamic range of a signal is a terrible way to think. You want the signal to come out in the same shape as it went in, not just as loud or soft as it goes.
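A rough cross-check of those THD figures using the textbook relation for an ideal N-bit quantizer, SNR ≈ 6.02·N + 1.76 dB for a full-scale sine (whether the error shows up as distortion or, with dither, as noise, its level is set by the step size):

```python
def quantization_error_percent(bits: int) -> float:
    """Ideal-quantizer error level for a full-scale sine, as a percentage."""
    snr_db = 6.02 * bits + 1.76
    return 100.0 * 10 ** (-snr_db / 20.0)

for b in (8, 16):
    print(f"{b:2d}-bit ideal quantizer -> error ~{quantization_error_percent(b):.4f}% of full scale")
# 8-bit  -> ~0.3192%  (the 0.39% above roughly corresponds to a flat 6 dB/bit with no 1.76 dB term)
# 16-bit -> ~0.0012%  (close to the 0.0015% quoted)
```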

I'm just about worn out going over this, but a converter basically divides a signal into many steps, or measurements. I want as careful a measurement as I can get with my music. To not use all the steps or measurement accuracy that a converter can provide doesn't seem wise to me. Does it sound good to you?

Thus, if you don't use the full scale range that a converter can provide, by going right up to full scale, you are not measuring the signal as accurately as the converter can measure.

There are clipping problems in DAWs that need to be addressed, but just talking about the actual capturing of an audio signal in a converter, you must use the full range it can cover, or you are losing linearity and accuracy. That's not to say that it's not perfectly acceptable and usable if you don't go all the way to full scale, but only that YOU ARE NOT USING THE FULL CAPABILITY OF THE CONVERTER IF YOU DON'T GO CLOSE TO 0dBFS.
Old 23rd June 2006
  #42
Lives for gear
 
norman_nomad's Avatar
Hey Dave. Thanks for your comments and insights. I really don't want to belabor the subject. I’ll make my comments short.

Quote:
Originally Posted by Dave Derr
Actually this seems very misleading. Most music has a limited dynamic range that we actually use and hear. But this has nothing to do with accuracy or linearity as described above.

Even if a signal only has 30dB of useful dynamic range that we need to use, the details will be lost much more with an 8 bit converter than with a 16 bit converter. An 8 bit converter has a 48dB dynamic range but is only capable of .39% THD, if my memory is correct. Compare that to a 16 bit converter, which has a theoretical linearity of .0015% THD. The 16 bit converter is 256 times more accurate in describing the signal. Distortion has many implications, but to me, it's how accurately the output matches the input in terms of a pure sine wave. To say you only need a converter with the dynamic range of the signal is really, really misleading and confusing. You want to describe an input signal as accurately as you can. To say you only need enough bits to describe the useful dynamic range of a signal is a terrible way to think. You want the signal to come out in the same shape as it went in, not just as loud or soft as it goes.
So you're saying that recording with fewer bits will naturally introduce more THD, as is the nature of the relationship between bit depth and noise/distortion. So we can assume that a vocalist recorded through a 24 bit converter at -40dbfs will less resemble the input signal than that same vocalist recorded at -4dbfs, because more distortion and noise have been introduced?

Are there any simple tests I could do to validate this?
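One possible simple test, sketched with only the Python standard library (16-bit files are used here because the stdlib wave module writes them directly; the file names are just placeholders): render the same 1 kHz tone at two peak levels, import both into a DAW, boost the quiet one by 36 dB, and compare them by ear or with a null test.

```python
import math
import struct
import wave

SR, FREQ, SECONDS = 44100, 1000.0, 2

def write_tone(path: str, peak_dbfs: float) -> None:
    """Write a mono 16-bit sine tone peaking at `peak_dbfs`."""
    amp = int(32767 * 10 ** (peak_dbfs / 20.0))
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)            # 16-bit samples
        w.setframerate(SR)
        frames = b"".join(
            struct.pack("<h", int(amp * math.sin(2 * math.pi * FREQ * i / SR)))
            for i in range(SR * SECONDS)
        )
        w.writeframes(frames)

write_tone("tone_minus4dBFS.wav", -4.0)
write_tone("tone_minus40dBFS.wav", -40.0)
```

Boosting tone_minus40dBFS.wav by 36 dB in the DAW and comparing it against tone_minus4dBFS.wav is the listening half of the test.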

Quote:
Originally Posted by Dave Derr
I'm just about worn out going over this, but a converter basically divides a signal into many steps, or measurements. I want as careful a measurement as I can get with my music. To not use all the steps or measurement accuracy that a converter can provide doesn't seem wise to me. Does it sound good to you?
According to Nika's argument, the larger bit depth only does a better job of describing the random noise, not the source signal, as only so many "discrete increments" are needed to accurately measure amplitude change.

Again, I’m not clever enough to know how to test this…
Quote:
Originally Posted by Dave Derr
Thus, if you don't use the full scale range that a converter can provide, by going right up to full scale, you are not measuring the signal as accurately as the converter can measure.
And I guess this is the point of contention. Does bit depth determine accuracy regardless of dynamic content? From a practical standpoint I'm starting to wonder if I should be recording my Marshall stack at -3dbfs rather than the -18dbfs I've been using previously. What would you advise I do?

Quote:
Originally Posted by Dave Derr

There are clipping problems in DAWs that need to be addressed, but just talking about the actual capturing of an audio signal in a converter, you must use the full range it can cover, or you are losing linearity and accuracy. That's not to say that it's not perfectly acceptable and usable if you don't go all the way to full scale, but only that YOU ARE NOT USING THE FULL CAPABILITY OF THE CONVERTER IF YOU DON'T GO CLOSE TO 0dBFS.
And I guess the question I still have: Is “capability” the same as “quality”?

I’m not trying to be argumentative, I’m just trying to get the best information.

Again, I appreciate all of the comments and insights!
Old 24th June 2006
  #43
The Distressor's "daddy"
 
Dave Derr's Avatar
 

SOURCE QUALITY Vs CONVERTER QUALITY

When one speaks of converter quality and accuracy, we are assuming an ideal input signal that has perfect quality. So of course if you are digitizing a ripping heavy guitar, chances are the S/N of the guitar is not going to be greatly affected even with -30dBFS signals on a 16 bit converter. There may be a very slight degradation of resolution, but I doubt many would notice, nor would the converter noise be louder than the hiss coming out of the cranked Marshall.

The point here is that we are talking about getting the best accuracy on an "ideal" signal from a given converter, not whether it may noticeably affect some specific signal. With modern 20 - 24 bit or greater converters, most real world signals won't be noticeably harmed by recording the peaks 5 or 10 dB down from full scale, especially if your recording format uses more than 16 bits to store the signal. Still, if I were recording Luciano Pavarotti, I'd like to get as hot a signal into the converter as possible without clipping.
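The arithmetic behind "the converter noise won't be louder than the amp hiss", as a hedged sketch; the amp-hiss figure below is a purely hypothetical value for illustration, not a measurement:

```python
CONVERTER_BITS = 16
PEAK_DBFS = -30.0                                         # peaks recorded 30 dB down
converter_floor_dbfs = -(6.02 * CONVERTER_BITS + 1.76)    # ideal 16-bit floor, ~ -98 dBFS

snr_left = PEAK_DBFS - converter_floor_dbfs               # ~68 dB of SNR still available
AMP_HISS_BELOW_PEAK_DB = 55.0                             # hypothetical cranked-amp hiss figure

print(f"converter noise sits ~{snr_left:.0f} dB below the recorded peaks")
print(f"assumed amp hiss sits ~{AMP_HISS_BELOW_PEAK_DB:.0f} dB below the peaks")
# under these assumptions the amp's own hiss, not the 16-bit floor, dominates the recording
```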

One interesting observation. Only the very VERY best analog tape decks get better performance than a decent 12 bit converter. We are talking "measurable" performance: signal to noise, distortion, dynamic range etc. As far as what sound we end up liking most, that's another ball game, and I'm not going to even start up that alley!
Old 24th June 2006
  #44
Gear Head
 

Just getting back to the basic topic.

In general, B is more on the FAT (distorted) side than A... and it seems louder too.
But in B the kick seems to disappear more, and I also don't hear some details which I hear in A, maybe because B is overcompressed.
Anyway, I like the tune, but the mix in both A and B is not so good IMHO...
Drums in general disappear... and the guitars are way too loud, and when run through the 2 bus they seem to cover the voice too much...
Maybe we should listen to a better mixed version to appreciate the Fatso more.
Old 24th June 2006
  #45
Registered User
 

all i know is that when i stopped recording as hot as possible everything started to sound better.
Old 24th June 2006
  #46
Lives for gear
 
Agzilla's Avatar
 

..

"all i know is that when i stopped recording as hot as possible everything started to sound better."


Same here, in a major way... I don't know all the tech talk and jargon, but I do know what I hear and what my clients' feedback suggests...

Maybe as long as you're happy with your results it's okay; maybe different scenarios and collections of frequencies, i.e. different music, need different solutions...

Maybe one size does not fit all, after all...

I watched Charles Dye's Mix It Like a Record, found it interesting and it gave me some good ideas, but some of his ideas definitely do NOT work for the music I make...

It does not invalidate the methods though...

"Be like water...... " Bruce Lee
Old 4th October 2006
  #47
Lives for gear
 

Quote:
Originally Posted by briefcasemanx View Post
all i know is that when i stopped recording as hot as possible everything started to sound better.

Well done! Thank heavens someone is listening :-)

I have read this latest 'resolution thread' with utter dismay - I've written enough about this to sink a battleship over the last 5 years. I hope people will forgive me for not having the patience to go over it all again :-(
Old 5th October 2006
  #48
Gear Addict
 
huarez's Avatar
 

Thanks for sharing the files! There is an audible difference in level between the two mixes. I find A less stressful and B nearly unenjoyable. Both hurt the ears. The song is nice. My two cents.
Old 5th October 2006
  #49
Lives for gear
 
camus's Avatar
 

Good thread on "recording levels" at Tapeop:

http://messageboard.tapeop.com/viewtopic.php?t=38430
Old 5th October 2006
  #50
Gear Guru
 
AllAboutTone's Avatar
 

I guess I have to be the RAT in the bunch. First of all, A sounds the best; it's louder, and it's not a volume thing. B sounds like it has the middle sucked out of it... Other than that, I think you should pick another song to sample with. The entire mix was downright cold with no depth whatsoever - a bad, bad mix to be doing a 2 buss sample with. Everything sounded bad. How are you supposed to decide on a great 2 buss compressor when the root (mix) started off bad? So damn digital... let me guess, probably another Pro Tools mix... YIKES

PS: Also the B mix is bouncing from side to side. Not sure if you were in stereo mode or dual mono, but it did not sound good over headphones; it was jerking my ears from one side to the other. Not sure, but maybe it was in the low freq.
Old 7th October 2006
  #51
Gear Head
 
deadhorses's Avatar
 

Hahaahaha
Horatio Sanz: (music starts)
I don't care that tomorrow is Easter
Christmas is number one.
I don't care about colored eggs
Christmas toys are more fun.
I don't care about marshmallow peeps
The Cadbury bunny gives me the creeps.
I wish it was Christmas today
262 days away.
(Music stops)

I’ve heard it all! Someone actually took the sketch comedy of Saturday night live! The rhythm with added distortion minus the casio keyboard!! WE are all laughing, too funny!
http://snltranscripts.jt.org/03/03qeaster.phtml



I wish it was Christmas today
I wish it was Christmas today
I wish it was Christmas today


Cheers
Old 7th October 2006
  #52
Lives for gear
 
max cooper's Avatar
 

Quote:
Originally Posted by u b i k View Post
digital resolution aside, my experience is that the analog stages of converters don't do as well when hit with the kind of levels that approach 0dbfs.
I thought I was losing my mind.

Here too, with two diff. converters.

I haven't taken a moment to investigate, just worked a tad lower.
Old 7th October 2006
  #53
Lives for gear
 
max cooper's Avatar
 

Quote:
Originally Posted by Paul Frindle View Post
Well done! Thank heavens someone is listening :-)

I have read this latest 'resolution thread' with utter dismay - I've written enough about this to sink a battleship over the last 5 years. I hope people will forgive me for not having the patience to go over it all again :-(
How about a link?

Always like to read your stuff, Paul. Don't know that I've caught your take on this topic.
Old 9th October 2006
  #54
Lives for gear
 

Quote:
Originally Posted by max cooper View Post
How about a link?

Always like to read your stuff, Paul. Don't know that I've caught your take on this topic.
Sorry - I was getting all het up by the continued notion some people have that you somehow lose bits when you turn the level down :-( I know (or at least hope) that most people have now finally abandoned this erroneous notion..

I have posted scores of times about the misapprehensions that people have been encouraged to believe, which can damage the work they are doing etc..
Old 12th October 2006
  #55
Lives for gear
 

Quote:
Originally Posted by norman_nomad View Post
If what you're trying to say is that an ADC will perform better near 0dbfs because it specs better (distortion, s/n, frequency response, etc.), then I'm willing to agree with the assertion. If what you're trying to say is that an ADC will sound better near 0dbfs simply because you're utilizing more bits, then I don't agree.

Bits measure dynamic change, not fidelity, and only so many bits are actually necessary to completely and accurately measure dynamic change. Fewer bits doesn't always = bad sound.

For reference, here’s where I gathered some of my current understandings:

And please, if I’m failing to grasp a major concept here or something please let me know. I like to feel educated on these kinds of things.



(Borrowed from this thread on the Digi forum. Link found here: http://duc.digidesign.com/showflat.p...=&fpart=4&vc=1 I hope Nika doesn’t mind me reposting this.)

First, as illogical as it seems, bits do NOT equal better resolution of anything but the noise in its signal path. More bits ONLY equals dynamic range. I have no idea what the understanding is of the various people that are following this thread, so I don't know in what amount of detail to give my answer. I'm fairly new to this particular forum, though I frequent a few others. If I offend anyone, please forgive.
--snipped for brevity--
This long post about resolution and bits, including extracts from Nika, is great, but such explanations are excessively long-winded.

The whole thing can be summed up very simply by pointing out that the random element of any dithered signal turns quantisation steps into a continuous noise floor and statistically removes (smooths over) all the hard boundaries.

Therefore a properly constructed digital system (with dither) does not exhibit quantisation distortion, regardless of how small the signal level may be..

This means that the concept of 'resolution of a digital signal' is completely false and irrelevant..

The apparent quantisation you see on your editing screens is only a displayed artefact of looking at undecoded sample values rather than the actual decoded signal (simply because the decoded signal is not available within your workstation app) - therefore it does NOT represent what will exit your system from the DAC..
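A short sketch of that point, under the usual assumptions (ideal quantizer, TPDF dither of about one LSB, numpy available): a tone smaller than one quantization step vanishes completely without dither, but survives, buried in a smooth noise floor, when dither is applied before quantizing.

```python
import numpy as np

rng = np.random.default_rng(1)
n, bits = 1 << 16, 16
lsb = 2.0 / (2 ** bits)
t = np.arange(n) / 48000.0
tone = (0.4 * lsb) * np.sin(2 * np.pi * 997.0 * t)   # a tone smaller than 1 LSB (~ -104 dBFS)

def quantize(x, dither=False):
    """Ideal 16-bit quantizer, optionally with +/-1 LSB TPDF dither added first."""
    d = (rng.random(n) - rng.random(n)) * lsb if dither else 0.0
    return np.round((x + d) / lsb) * lsb

undithered = quantize(tone)
dithered = quantize(tone, dither=True)
print("undithered output is silence:", not undithered.any())                # True: the tone is gone
print("dithered output still carries the tone, gain ~",
      round(float(np.dot(dithered, tone) / np.dot(tone, tone)), 2))         # ~1.0, buried in noise
```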
Old 12th October 2006
  #56
Motown legend
 
Bob Olhsson's Avatar
 

Let me just add that since none of us has a wayback machine to set lab-perfect A to D levels, we must always allow for errors in the real world. A lot of us have found that it's best to err on the low side rather than the high side, for many reasons.
Old 15th October 2006
  #57
The Distressor's "daddy"
 
Dave Derr's Avatar
 

SERIOUS MISINFORMATION

WOW. I GIVE UP.... AGAIN!

I'D LOVE TO PLACE A HUGE "LIFE-ALTERING" BET ABOUT MY ASSERTIONS ON LOSS OF BIT RESOLUTION AND QUALITY WHEN RECORDING BELOW, OR WAY BELOW FULL SCALE.

TRY RECORDING A PERFECT SIGNAL 90DB DOWN AND LET ME KNOW WHAT YOU HEAR. It's all a matter of degree. 90dB down should still give you 8 bits on a 24 bit converter, BUT YOU WILL NOT LIKE WHAT YOU HEAR NO MATTER WHAT DANGED DITHER YOU USE.

AND IF YOU WRITE SOFTWARE THAT CAN'T HANDLE FULL SCALE DIGITAL AUDIO SIGNALS... SHAME ON YOU!


Old 16th October 2006
  #58
Lives for gear
 
gsharp's Avatar
 

Quote:
Originally Posted by Dave Derr View Post
WOW. I GIVE UP.... AGAIN!

I'D LOVE TO PLACE A HUGE "LIFE-ALTERING" BET ABOUT MY ASSERTIONS ON LOSS OF BIT RESOLUTION AND QUALITY WHEN RECORDING BELOW, OR WAY BELOW FULL SCALE.

TRY RECORDING A PERFECT SIGNAL 90DB DOWN AND LET ME KNOW WHAT YOU HEAR. It's all a matter of degree. 90dB down should still give you 8 bits on a 24 bit converter, BUT YOU WILL NOT LIKE WHAT YOU HEAR NO MATTER WHAT DANGED DITHER YOU USE.

AND IF YOU WRITE SOFTWARE THAT CAN'T HANDLE FULL SCALE DIGITAL AUDIO SIGNALS... SHAME ON YOU!


I'll bet you a FATSO (cuz I need another one) that you can record a source that only has 60dB of dynamic range, for example, well below full scale on a 24 bit converter without losing any information necessary to faithfully reproduce it. If what you assert were true the perceived sound quality in recordings would fluctuate as the instruments played louder and softer.
Old 16th October 2006
  #59
Lives for gear
 
minister's Avatar
yeah, Dave Derr makes great gear, but don't follow his advice on Digital Levels.

one reason is your system should be calibrated to -20, -18, -16 or -14dBFS = 0VU, and if you record up to -3dBFS, you are overdriving your analog components. add that up over a mix and it is very harsh on the ears. another reason is that the whole argument about resolution loss in 24 bit is a myth, as many, including Paul Frindle, have demonstrated these last few years. even signals of -3dBFS going into plug-ins have problems being reconstructed properly.

keep your levels DOWN in digital and everything will sound much better and less harsh, and your mixes won't collapse in mastering. bring up levels all the way at the END of the line.
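The gain-staging arithmetic behind that calibration point, assuming the common +4 dBu = 0 VU operating level (the dBFS calibration values are the ones listed above):

```python
OPERATING_LEVEL_DBU = 4.0    # +4 dBu = 0 VU, the usual pro-audio operating level

def analog_level_needed(target_dbfs: float, dbfs_at_0vu: float) -> float:
    """dBu the analog chain must deliver to hit `target_dbfs` on a converter
    calibrated so that 0 VU (+4 dBu) reads `dbfs_at_0vu`."""
    return OPERATING_LEVEL_DBU + (target_dbfs - dbfs_at_0vu)

for cal in (-20, -18, -14):
    print(f"0 VU = {cal} dBFS: peaks at -3 dBFS need "
          f"{analog_level_needed(-3.0, cal):+.0f} dBu from the analog chain")
# with 0 VU = -18 dBFS, peaks at -3 dBFS mean the preamp is pushing out +19 dBu
```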

as for the SONY DYNAMICS on a 2-BUS vs. a hardware comp, i have to say that as excellent as the OXFORD DYNAMICS is (and i use it all the time), my Portico 5043 made it sound mushy by comparison.
Old 16th October 2006
  #60
The Distressor's "daddy"
 
Dave Derr's Avatar
 

HA HAAA

I think it's best if you don't listen to me either! There are obviously too many experts already.

Just don't place your money on my assertion that, for a perfect source, every 6dB you record below full scale loses you 1 bit of a converter's resolution. Record 20dB down from full scale, and you have turned a 20 bit converter into a 16 bit converter.

You may as well argue with me about the laws of gravity. Whether you like the sound, or your plugins distort, or your gain trim somewhere is clipping is not the issue.

If your software can't handle full scale signals, I'd write your congressman, although complaining to the guys who wrote the software might be more effective.

As immutable as the laws of nature, taxes, and death, converters will only produce their advertised number of bits if you record up to full scale.

By the way, YOU DO KNOW THAT ALL YOUR CDs ARE RECORDED UP TO FULL SCALE SO YOU GET THE MAXIMUM QUALITY OUT OF THEM AND YOUR CD PLAYER'S CONVERTERS? ESPECIALLY ROCK RECORDS. MASTERING GUYS MAKE SURE THEY USE EVERY SINGLE LAST DB OF HEADROOM TO SQUEEZE THE BEST OUT OF YOUR CD PLAYERS. I would not go trying to convince them that their records would sound better 6dB below clipping!