Originally Posted by Michael Carnes
I've experimented with some dirty stuff (I didn't die--I'm just not at Lex) and found it to be pretty challenging. The real things that add the dirt are not always obvious and seem to rely on a somewhat 'compromised' data pathway from start to finish.
My guess is that a lot of the dirt stems from some of the coefficient quantization issues that were described by Oppenheim & Schafer back in 1975. A fixed point filter will have noise and distortion. The 1st order filters used by the older algorithms tend to be cleaner than 2nd order filters, but a little bit of noise, times a few hundred passes through a delay feedback loop, can result in a lot of noise.
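To make the accumulation point concrete, here's a toy Python sketch (nothing from an actual Lexicon design — the delay length, gains, and Q15 rounding are all made-up illustration) that runs the same delay-plus-one-pole-lowpass feedback loop twice, once in double precision and once with every intermediate result rounded to 16-bit steps, and measures how far the two drift apart:

```python
def q15(x):
    """Round a value to the nearest 16-bit (Q15) step -- a crude model
    of fixed-point roundoff, ignoring saturation."""
    return round(x * 32768) / 32768

def loop(n, quantized):
    """A 100-sample delay with feedback through a one-pole lowpass.
    All lengths/gains here are illustrative; with quantized=True every
    arithmetic result is rounded to Q15, so a little roundoff noise is
    re-injected on every pass through the loop."""
    delay = [0.0] * 100
    lp = 0.0            # one-pole lowpass state
    g, a = 0.9, 0.4     # feedback gain, lowpass coefficient
    out = []
    for i in range(n):
        x = 0.5 if i == 0 else 0.0    # single impulse in
        y = delay[i % 100]            # read the delayed sample
        lp += a * (y - lp)
        if quantized:
            lp = q15(lp)
        fb = x + g * lp
        if quantized:
            fb = q15(fb)
        delay[i % 100] = fb           # write back into the loop
        out.append(y)
    return out

clean = loop(20000, quantized=False)
dirty = loop(20000, quantized=True)
# RMS-style measure of the accumulated roundoff "dirt":
err = sum((c - d) ** 2 for c, d in zip(clean, dirty)) ** 0.5
```

The per-operation rounding error is tiny, but because it sits inside the feedback path it gets recirculated on every one of the hundreds of passes, which is the "a little noise times a few hundred passes" effect described above.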
A possibility: This noise would definitely be different between output channels. I know that adding some bandpass filtered noise to the output channels, with decorrelated noise sources for each channel, can create a wider stereo image if used subtly. Maybe this coefficient quantization noise adds "depth" to the hardware reverbs. Or maybe not. Just thinking as I type here.
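That decorrelated-noise trick can be sketched in a few lines: two independently seeded noise generators, each run through a bandpass filter (a Chamberlin state-variable filter here; the frequency coefficient, Q, and mix level are arbitrary placeholders, not values from any real reverb), mixed subtly into the two channels:

```python
import random

def bandpass_noise(n, seed, f=0.05, q=0.7):
    """White noise through the bandpass output of a Chamberlin
    state-variable filter. f and q are illustrative values only."""
    rng = random.Random(seed)
    low = band = 0.0
    out = []
    for _ in range(n):
        x = rng.uniform(-1.0, 1.0)
        high = x - low - q * band
        band += f * high
        low += f * band
        out.append(band)
    return out

def corr(a, b):
    """Normalized cross-correlation at lag zero."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a)
           * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

n = 16384
left_n = bandpass_noise(n, seed=1)
right_n = bandpass_noise(n, seed=2)   # different seed -> decorrelated

# Mixing it in "subtly": e.g. out_left[i] = dry[i] + 0.01 * left_n[i]
# (roughly -40 dB), and likewise for the right channel with right_n.
```

The different seeds are what make the channels decorrelate; the cross-correlation between the two noise streams sits near zero while each stream correlates perfectly with itself.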
Floating point just doesn't lend itself to that sort of stuff. Don't get me wrong--you can add schmutz, but it doesn't sound the same. I'm probably not going to spend any more time on that experiment for quite some time.
Program everything in fixed point MMX or SSE, and you could probably get the schmutz back. You would need to do a lot of bitwise operations to get the wordlengths and saturation headroom right. I've thought about doing this, then thought "nah."
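For a sense of what that bit-twiddling looks like, here's a minimal Q15 multiply with saturation in Python (the rounding and clamping conventions are just one common choice, not anything specific to MMX/SSE code):

```python
def sat16(x):
    """Clamp a result to the signed 16-bit range, like a DSP's
    saturating arithmetic (instead of two's-complement wraparound)."""
    return max(-32768, min(32767, x))

def mul_q15(a, b):
    """Q15 multiply: 16x16 -> 32-bit product, rounded and shifted
    back down to a 16-bit wordlength, then saturated."""
    return sat16((a * b + (1 << 14)) >> 15)

a = int(0.9 * 32768)                 # 0.9 in Q15
p = mul_q15(a, a)                    # roughly 0.81 in Q15
overflow = mul_q15(-32768, -32768)   # -1 * -1 saturates rather than wrapping
```

Every multiply and add in a filter needs this kind of explicit shifting and clamping, which is exactly the headroom bookkeeping that makes the fixed-point port tedious.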
I've programmed things on a fixed point DSP that's fairly close to the fixed point Lexicon hardware (the Spin Semiconductor FV-1), and I don't hear any particular "magic" versus floating point that would make moving over to fixed point worthwhile. Maybe I need to listen to this with fresh ears.