Old 22nd May 2019

Originally Posted by Jay Rose
Manufacturers eventually developed Dolby, which hides most of the hiss by first encoding the high frequencies, then decoding them on playback. As a result most commercial recordings had no hiss at all by the time they were burned to an LP. Why on earth anyone would want that noise in their recordings is beyond me.

Manufacturers didn't develop Dolby. One specific manufacturer -- Dolby Labs -- did. They sold pro users hardware that ran its codec in four fixed frequency bands, and licensed a simplified, sliding HF-only implementation to various cassette deck makers.

In both setups, the decoder had to be calibrated to the encoder. Pro units (and Advent's standalone consumer processor) had a tone generator, meter, and calibration knob for this purpose. Most cassette decks hardwired the "calibration" and didn't let the user tweak.

It wasn't magic. It relied on masking: loud signals (in a properly calibrated system) were passed unchanged, on the theory that they'd hide noise from the tape record/play cycle. But the noise was still there, and could interfere with subtle sounds in nearby bands that weren't being masked. If you looked on a scope, you could see the noise riding even the louder signals. But the ear could ignore it.

Soft signals were boosted, which meant the tape noises would be relatively lower than they'd be for an unboosted signal.

Of course the boosted signals had to be lowered to their original volume on playback (lowering system noise at the same time), and the knee where the system shifted from boost to linear had to be calibrated. Otherwise there'd be odd dynamic effects.
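The boost/knee scheme described above can be sketched as a static compander working in dB. The knee level, the 2:1 boost slope, and the boost cap below are round illustrative numbers, not actual Dolby parameters, and a real unit tracks levels dynamically per band rather than mapping static levels:

```python
# Toy static compander, all levels in dB. KNEE_DB plays the role of the
# calibrated reference level; the numbers are hypothetical.
KNEE_DB = -20.0       # above this, signals pass unchanged (masking does the work)
MAX_BOOST_DB = 10.0   # cap on how much soft signals are boosted

def encode(level_db):
    """Boost soft signals before they hit tape; pass loud ones unchanged."""
    if level_db >= KNEE_DB:
        return level_db
    return level_db + min(MAX_BOOST_DB, (KNEE_DB - level_db) * 0.5)

def decode(level_db):
    """Exact inverse of encode() -- and it pushes tape hiss down too."""
    if level_db >= KNEE_DB:
        return level_db
    if level_db >= KNEE_DB - MAX_BOOST_DB:
        return 2.0 * level_db - KNEE_DB
    return level_db - MAX_BOOST_DB

# A -40 dB signal goes to tape at -30 dB and decodes back to -40 dB,
# while a -50 dB hiss floor added by the tape decodes down to -60 dB:
# the soft signal's S/N improves by the amount of the boost.
```

If the playback calibration drifts, the knee in decode() no longer lines up with the knee used in encode(), which is exactly the "odd dynamic effects" mentioned above.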

In other words, Dolby at the best of times had its own sound. Better than tape hiss, but with other sacrifices that people didn't mind as much. About 20 years after pro Dolby A was released, they came up with Dolby SR, which used a more elaborate combination of fixed and sliding bands for more precise masking.

FWIW, dbx released a single-band codec system that didn't require calibration. Dolby and dbx were not compatible with each other.
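A toy calculation can show why the fixed-ratio dbx design needs no calibration while a knee-based design does. The 2:1 ratio is dbx's published compansion ratio; the reference and knee levels are made-up illustrative numbers:

```python
# dbx-style: fixed 2:1 compression in dB around an arbitrary reference.
REF_DB = 0.0

def dbx_encode(x):
    return REF_DB + (x - REF_DB) / 2.0

def dbx_decode(y):
    return REF_DB + (y - REF_DB) * 2.0

# Knee-style (toy numbers, no boost cap for simplicity): soft signals are
# compressed 2:1 below a knee that encoder and decoder must agree on.
KNEE = -20.0

def knee_encode(x):
    return x if x >= KNEE else (x + KNEE) / 2.0

def knee_decode(y):
    return y if y >= KNEE else 2.0 * y - KNEE

# Put the same constant channel level error (a misadjusted machine, +3 dB)
# inside each encode/decode loop.  The dbx chain turns it into a constant
# +6 dB offset at every level -- dynamics intact, just set the fader.
errs = [dbx_decode(dbx_encode(x) + 3.0) - x for x in (-60.0, -30.0, 0.0)]
# The knee chain gives a level-DEPENDENT error (+6 dB below the knee,
# +3 dB above it), i.e. mangled dynamics unless the knee is calibrated.
```

There is no knee or reference tone to align in the dbx chain because the ratio is the same everywhere; that is the sense in which it "didn't require calibration."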

Philosophy: analog Dolby and dbx didn't reduce noise. They relied on psychoacoustic masking to make noise less noticeable, and on temporary boosting to make low signals louder compared to the noise. The played-back signal wasn't identical to the original, but had modulated noise patterns that folks didn't mind. In other words, analog NR was very similar to what MPEG compression does in the digital domain -- just with a lot fewer bands, and some volume tweaking in the channel rather than digital bit reduction (hide low-bit-depth noise where you wouldn't hear it) and zip-like compression of the result.
I am *just a little bit* of an expert on DolbyA -- I'm actually writing a DolbyA-compatible decoder that does better than a real DolbyA.

Everything written above about DolbyA is true, but the idea of 'modulated noise patterns' can be taken a step further... DolbyA produced a lot of intermodulation distortion due to its fast gain control. Even at the best of times, the very subtle, ingenious DolbyA compressor design (used in inverse for decoding) still produced a significant amount of IMD -- even after decoding.

When listening to fully decoded material, the IMD was most noticeable at high frequencies -- causing a fuzz or veil (not to be confused with noise modulation directly). Also, certain kinds of complex sharp transients (like cymbals) were blunted. This was due to a few things, but all in all the biggest problem was the modulation products produced -- and the inability to undo those products during decoding. (It seems the feedback design also didn't fully undo the dynamics of the encoding -- I'm still looking into that.)
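A toy model of where those modulation products come from: varying the gain of a tone at rate f_mod is amplitude modulation, which creates sidebands at the tone frequency +/- f_mod with amplitude depth/2. Slow gain changes keep the sidebands close to the tone, where masking hides them; fast gain control sprays them far away, heard as the fuzz/veil around intense tones. The sample rate, frequencies, and depth below are arbitrary -- this is plain AM, not the actual DolbyA detector:

```python
import math

SR = 8000          # sample rate, Hz (toy value)
N = SR             # one second of samples
F_TONE = 1000.0    # a loud tone the gain control is tracking

def tone_mag(x, freq):
    """Amplitude of one frequency component via a single-bin DFT."""
    w = 2.0 * math.pi * freq / SR
    re = sum(x[i] * math.cos(w * i) for i in range(len(x)))
    im = sum(x[i] * math.sin(w * i) for i in range(len(x)))
    return 2.0 * math.hypot(re, im) / len(x)

def gain_modulated_tone(f_mod, depth=0.5):
    """A tone whose gain wobbles at f_mod Hz, like a gain-control loop."""
    return [
        (1.0 + depth * math.cos(2.0 * math.pi * f_mod * i / SR))
        * math.cos(2.0 * math.pi * F_TONE * i / SR)
        for i in range(N)
    ]

slow = gain_modulated_tone(10.0)    # sidebands at 990/1010 Hz: masked
fast = gain_modulated_tone(200.0)   # sidebands at 800/1200 Hz: the "veil"
```

Either way the sideband energy exists on tape; decoding the levels correctly doesn't remove products the encoder created, which matches the point above about being unable to undo them.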

On a 'tilting at windmills' quest, I have written a very precise, much-lower-IMD DA decoder in software (previously not thought to be possible) -- and the quality improvement is amazing... A lot of old recordings can now be more completely recovered. (I am NOT touting the decoder, just describing what happens in the DolbyA encode/decode cycle.)

DolbyA encode/decode does have a 'sound' -- some engineers realized it even back in the 1960s, and using DolbyA was a Faustian bargain (noise vs. certain kinds of quality). There were limits to what the hardware could do (Hilbert transforms and dynamic attack/release filtering just weren't practical). The software version does a LOT of math to stash the modulation products so that they occur when least audible (basically, the *unwanted* modulation products are suppressed by doing the modulation at a different time -- tricky stuff.)
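For a sense of what "dynamic attack/release filtering" means as a software building block: a one-pole smoother with separate attack and release time constants is the standard way to slew a gain trajectory so it never changes faster than chosen limits, which narrows the sidebands the gain changes create. This is a generic sketch under that assumption, not the actual decoder's math:

```python
import math

def smooth_gain(targets, sr, attack_ms, release_ms):
    """One-pole gain smoother with separate attack/release constants.

    'Attack' (usually fast) applies when the detector asks for less gain,
    'release' (usually slower) when gain is recovering.
    """
    atk = math.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = math.exp(-1.0 / (sr * release_ms / 1000.0))
    g = targets[0]
    out = []
    for t in targets:
        coef = atk if t < g else rel   # dropping gain = attack, rising = release
        g = coef * g + (1.0 - coef) * t
        out.append(g)
    return out

# A hard step in the requested gain (a transient hitting the detector)...
step = [1.0] * 100 + [0.25] * 900
fast = smooth_gain(step, 8000, 0.1, 10.0)   # near-instant, hardware-like
slow = smooth_gain(step, 8000, 5.0, 50.0)   # gently slewed in software
# ...reaches the same final gain either way, but the slewed version
# changes far less per sample, so its modulation products stay narrow.
```

Both trajectories settle to the same gain; only the rate of change (and hence the sideband spread) differs.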

Anyway -- too many details...

But, to emulate the sound of professional recording -- using the typical technology of the day -- I'd expect that, on average, the DolbyA sound is more impactful than the 'sound' of the tape recorder itself.

The DolbyA distortion has some characteristics similar to tape distortion -- it tends to soften and blunt the more intense details. A true DolbyA (not my decoder) can leave a veil of intermod that occurs coincident with intense tones (I can show examples). When the material is decoded with the distortion removed (which is possible in software), the 'veil' disappears, and all that is left is the diminished tape hiss. (The distortion veil is NOT hiss.)

(I started describing all of the DolbyA distortion characteristics, then realized that it wouldn't be helpful... so I decided not to post more unless asked.)

Bottom line: a lot of older professional material was done with DolbyA. Whatever IMD and compression there was due to 'tape', there was even more distortion caused by the 'DolbyA' encode/decode cycle.

A caveat: usually, people use artificial 'sounds like' distortions to 'sound good'. Sometimes truly emulating the distortion isn't the best choice -- DolbyA distortion (if you have ever listened carefully to ABBA, there is significant DolbyA distortion in there) doesn't always sound good -- but it is in a lot of old recordings.