Using Dolby with digital recording
#1 - 22nd October 2011
jlaber (Gear nut, Thread Starter) | Joined: Feb 2009 | Posts: 78

OK. I expect some knee-jerk reactions to the subject but I've been curious for many years about this. Obviously, noise isn't quite the problem in the digital realm as it is with analog tape. But I'm eyeing up some Dolby racks for use with my analog decks, and I'll be trying it on digital as well.

Why? Primarily out of curiosity to see if it has a cool effect. I doubt it would sound cool on everything, but maybe certain things, particularly if tweaked out of cal to get more compression or expansion.

But I see it this way: short of using an 1176 on every track, brick-wall limiting, or Apogee's built-in soft limit, we analog geezers had to get accustomed to using extreme caution when tracking to digital to prevent nasty clipping. I can see how this may have led to the over-compressed, louder-is-better revolution. If I understand correctly, Dolby A compresses equally at all frequencies. If the attack time is fast enough, would this not help us prevent digital clipping without sacrificing bits? Let's face it, and please excuse my ignorance if pre-emphasis or other techniques already take care of this, but at 6 dB down you've already lost half the resolution, bit-wise. I've read technical discussions where the math is not exactly 6 dB = half the bits, but it's in the ballpark.

So, my hope is that if I use Dolby A, or maybe even SR, I can get hotter levels into the converter inputs, preserving resolution while reducing the chances of clipping. Once decoded, the dynamics would be restored, with a bit of a side effect that just might sound a little vintage. I'm also curious about using plain old pre-emphasis and de-emphasis, like RIAA for vinyl or NAB for tape. The phase-shift artifacts could contribute to a more tape-like sound; combined with Dolby, maybe even more so.

Question is, I'm sure it's been tried, so has anyone posted on this before? I did a search, and nothing jumped out at me regarding the subject.

If you've tried it, please post here. Theories are welcome of course, discussion is healthy. Maybe this might spark some interest and some of you will give it a try.

So, happy posting. I hope I don't start WW III here...
#2 - 22nd October 2011
theblue1 (Gear Guru) | Joined: Mar 2005 | Location: Long Beach, CA | Posts: 21,350

Quote:
Originally Posted by jlaber View Post
...

First, I think you've 'apologized' in advance for any lack of knowledge, so, hopefully that will provide you some insulation here...

Moving right along, you seem to have a fundamental misunderstanding about 'resolution' in digital audio.

You don't 'lose resolution' by moving away from 0 dB FS ('digital zero'). You simply move your signal closer to the digital noise floor.

Since that noise floor with 24-bit sample word lengths is around 140 dB below 0 dB FS, and the analog components in your chain certainly have a much higher noise floor (probably more like 80-110 dB S/N for your chain even with optimal gain staging), you shouldn't have to worry about dipping significant signal into the digital noise floor -- even if you stay a very comfortable 12 to 18 dB under 0 dB FS.
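
Here's a quick back-of-the-envelope sketch of that arithmetic (assuming the textbook 6.02 x bits + 1.76 dB figure for an ideal quantizer and a generous 100 dB S/N for the analog chain; the exact numbers are illustrative only):

Code:
# Rough check: how much room is left above the quantization floor
# when tracking 18 dB below 0 dBFS on a 24-bit converter.
# (Sketch only -- assumes an ideal quantizer and the textbook
# 6.02*N + 1.76 dB dynamic-range figure.)

bits = 24
ideal_dynamic_range = 6.02 * bits + 1.76   # ~146 dB for an ideal 24-bit quantizer

peak_level_dbfs = -18.0                    # conservative tracking level
room_above_quant_floor = ideal_dynamic_range + peak_level_dbfs   # ~128 dB

analog_chain_snr = 100.0                   # optimistic figure for a real analog front end

print(f"Ideal 24-bit dynamic range: {ideal_dynamic_range:.0f} dB")
print(f"Signal peaking at {peak_level_dbfs} dBFS still sits "
      f"{room_above_quant_floor:.0f} dB above the quantization floor")
print(f"...which is far more than the ~{analog_chain_snr:.0f} dB S/N of the analog chain")

Either way, the noise arriving from the analog chain sits well above the converter's own quantization floor long before a 12 to 18 dB safety margin costs anything.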


Since the problem you sought to address by using Dolby isn't really a problem, my thoughts on various Dolby NR schemes are probably irrelevant, but I will say this: there seems little point in using such systems, which have their own negatives and tradeoffs, if you don't need to. And, with regard to your concerns above, you don't.

With regard to using Dolby as an 'effect' -- well, what the heck, go ahead and experiment, you might find you like it. Anything is possible.
#3 - 22nd October 2011
jlaber (Gear nut, Thread Starter) | Joined: Feb 2009 | Posts: 78

Quote:
Originally Posted by theblue1 View Post
...

OK, thanks, but I'm not clear on why you don't lose resolution at lower signal levels. If there is info on the Web that explains this, please point me there and I will do my homework.

I can see where you don't lose any sample-rate resolution. My understanding, or misunderstanding, applies to amplitude versus the number of bits available to represent a waveform at a given amplitude.

The misunderstanding may arise from encoding techniques, pre-emphasis, or signal processing that I'm not aware of, but when I researched A/D conversion for hi-fi audio in the past, nothing jumped out at me as fundamentally different from basic A/D conversion as it existed early on in, say, digital telephony in the '60s, digital oscilloscopes, or even digitizing DC signals for measurement, as in a DMM or an industrial PLC.

For the most basic digitizing scheme, take a sine wave as an example.

Assume the following:

* An 8-bit converter, for simplicity's sake, with 256 discrete levels represented by the range of numbers 0 to 255.

* A "zero-signal" bias level of 127, allowing both positive and negative sides of the incoming signal to be digitized.

If the incoming level is digitized to the maximum level before digital clipping, assumed 0 dB here, then there will be 127 discrete steps below zero and 127 discrete steps above zero representing the entire amplitude of the waveform. This works out to 254 non-zero discrete levels, plus one discrete level representing the zero-signal bias level, which adds up to 255 discrete steps overall (the 256th code goes unused if the waveform stays symmetric about the bias point).

Now take the extreme case where the incoming signal level is so low that the discrete steps representing the entire sine wave from top to bottom are one step above zero and one step below zero. This is where I assume we have lost resolution, since we now only have 3 discrete levels representing the sine wave; zero (127), minus one (126) and plus one (128). The resulting un-filtered wave is now basically a square wave.

In theory, yes, it's possible to filter the square wave to make it into a sine wave, but if the sampling rate is 44.1 kHz and the recorded waveform is at 20 Hz, the result still won't be a sine wave, considering the reconstruction filtering takes effect at frequencies above 20 kHz on a typical D/A converter.

Now, if all of this is moot due to encoding techniques or other forms of DSP in use across the broad range of conversion techniques (PCM, 1-bit, oversampling, etc) then I am not aware of how an incoming signal that is so low that it has 3 discrete steps would not have a loss in resolution compared to the maximum signal level that is represented by 255 discrete levels.

So that's the basis for my theory of lost resolution as signal levels are reduced below 0 dB full scale, the maximum signal before digital clipping. I am open to being educated here if I am totally off track.
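
For what it's worth, the 8-bit scenario above can be sketched in a few lines of Python (a toy linear quantizer written purely to illustrate the argument, not a model of any real converter):

Code:
import numpy as np

# Toy illustration of the 8-bit example above: a linear quantizer with
# 256 codes (0..255) and the "zero-signal" bias sitting at code 127.
fs = 44100
t = np.arange(fs) / fs
sine = np.sin(2 * np.pi * 20 * t)          # 20 Hz test tone

def quantize_8bit(x, amplitude):
    """Scale a +/-1.0 signal to 'amplitude' (in quantizer steps) and round."""
    codes = np.round(127 + amplitude * x)
    return np.clip(codes, 0, 255)

full_scale = quantize_8bit(sine, 127)      # uses nearly all of the codes
near_floor = quantize_8bit(sine, 1)        # collapses to codes 126/127/128

print("distinct codes at full scale:", len(np.unique(full_scale)))   # ~255
print("distinct codes near the floor:", len(np.unique(near_floor)))  # 3

Unfiltered, the 3-code version is essentially the stepped, square-ish wave described above.
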
#4 - 22nd October 2011
jlaber (Gear nut, Thread Starter) | Joined: Feb 2009 | Posts: 78

Again, the technical discussion is healthy. The main question, though, is: has anyone tried it, did it help to avoid digital clipping without further limiting, and most of all, does it produce a cool effect, at least for some things?
#5 - 22nd October 2011
Ben B (Lives for gear) | Joined: Aug 2007 | Location: USA | Posts: 1,532

Here's a link to an article that might shed some light on this topic:

http://www2.uic.edu/stud_orgs/prof/p...rt_3_rev_F.pdf

-Ben B

#6 - 22nd October 2011
jlaber (Gear nut, Thread Starter) | Joined: Feb 2009 | Posts: 78

Quote:
Originally Posted by Ben B View Post
...

Thanks, Ben.

Looking at this and other PCM articles refreshes my memory of digital communications theory from electronics school. I see how logarithmic conversion can put more resolution into lower-level signals, but it seems to be generally stated on the Web that linear encoding is used for CDs, computer audio files, etc. PCM, as I was taught and as I see it described on the Web, is an encoding format: once the audio is converted, it is encoded so that when the data is transmitted, or stored and recalled, the D/A converter can properly synchronize for playback. This would be especially important for stereo encoding into a single digital stream.
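
For reference, the logarithmic conversion used in digital telephony is mu-law companding (mu = 255 in the North American standard). A minimal sketch of that curve next to plain linear 8-bit coding shows how it spends far more of its codes on low-level signals; none of this applies to CDs or ordinary computer audio files, which are linear PCM as noted above:

Code:
import numpy as np

# Minimal sketch of mu-law companding (the logarithmic scheme used in
# digital telephony, mu = 255), compared with plain linear 8-bit coding.
MU = 255.0

def mu_law_encode(x):
    """Compress a +/-1.0 signal with the mu-law curve, then quantize to 8 bits."""
    compressed = np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)
    return np.round(127.5 + 127.5 * compressed)

def linear_encode(x):
    return np.round(127.5 + 127.5 * x)

quiet = 0.01 * np.sin(2 * np.pi * np.linspace(0, 1, 1000))   # roughly -40 dBFS tone

print("distinct linear codes:", len(np.unique(linear_encode(quiet))))   # only a few
print("distinct mu-law codes:", len(np.unique(mu_law_encode(quiet))))   # many more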

Still, I could be missing something here, but it's just not staring me in the face; in fact, I still see mention of low-level signals losing resolution, supporting the need to keep signals as hot as possible going into the converter. I admit, in the 24-bit world, even at 16 bits, it's not blatantly audible, but understanding the theory doesn't hurt, as long as we don't get carried away in applying it.

When I'm released from the hospital, I'll do some bench tests on various converters, and see what a super-low-level sine wave looks like when played back after A/D/A conversion. My test equipment should be able to cut it without introducing substantial noise. I can see how over-sampling can interpolate, and maybe make an otherwise square wave triangular, but I don't see it recovering a sine wave at extremely low levels.

Till then, happy AES-ing in NYC.
#7 - 23rd October 2011
3 + infractions, forum membership suspended | Joined: Jun 2011 | Location: at home | Posts: 2,401

Quote:
Originally Posted by jlaber View Post
...

Dolby won't work on digital.
What exactly are you going to do?

You can Dolby the analog recording
and de-Dolby it on playback, but so what?

Don't even think of going analog Dolby,
then to digital,
diddling the digital with FX,
and expecting the de-Dolby to work on/after the D/A.
#8 - 23rd October 2011
3 + infractions, forum membership suspended | Joined: Jun 2011 | Location: at home | Posts: 2,401

Quote:
Originally Posted by jlaber View Post
...

Maybe you should learn the theory completely
before trying to come up with original improvements;
then you would KNOW whether they would work or not.

You have 16/24/32/64 bits representing the signal.
No matter what the signal level, you still have 16/24/32/64 bits;
the only question is which bits are 0 and which are 1.

YOU DO NOT LOSE ANY RESOLUTION.

If ALL you did was analog > Dolby encode > A/D > D/A > Dolby decode > analog,
you would get back what you had, so why bother?

BUT if you diddle the digital in any way,
you mess up the final D/A for Dolby purposes,
and the Dolby will merely have been the same as EQ applied at the start that you cannot now un-EQ.
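
The encode/decode point can be illustrated with a toy compander far simpler than Dolby (a stand-in for the general idea only, not Dolby's actual multiband, level-dependent processing): compress with a square root on the way in, expand by squaring on the way out, and the round trip is exact; insert any gain change in between and the decode no longer returns the original.

Code:
import numpy as np

# Toy compander standing in for the idea behind Dolby encode/decode
# (NOT Dolby's real multiband level-dependent processing):
# encode compresses with a square root, decode expands by squaring.
def encode(x):
    return np.sign(x) * np.sqrt(np.abs(x))

def decode(y):
    return np.sign(y) * y**2

x = 0.25                         # some signal value
print(decode(encode(x)))         # 0.25  -> complementary, round trip is exact

# Now "diddle the digital" between encode and decode, e.g. a 6 dB cut:
y = encode(x) * 0.5
print(decode(y))                 # 0.0625, not 0.125 -- the 6 dB cut became 12 dB

In the same way, any EQ or level moves made in the digital domain land inside the Dolby-encoded signal, so the decode mistracks and the processing can no longer simply be undone.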