Posted 6th January 2019 by Analogue Mastering
There are two other variables in the mix that render this whole test invalid:
1. The DAC > we don't know what merit to award to the DAC, as it always needs an ADC to record its output back into the resulting file on the PC.

2. The ADC > we don't know what merit to award to the ADC, as it always needs a DAC to PLAY the signal back in order to record a result file.

The test premise assumes that we know "what good looks like", while in reality we don't. To explain:

If a DAC and ADC together score "+1" in the result file, we don't know whether:
the DAC was 0 and the ADC 1
the DAC was -1 and the ADC 2
the DAC was 1 and the ADC 0
the DAC was 2 and the ADC -1
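The ambiguity above can be sketched in a few lines. This is purely my illustration (the scores and the additive model are hypothetical, not taken from the actual test): the loopback only observes the combined contribution, so every pair that sums to the same total fits the observation equally well.

```python
# Hypothetical illustration: a loopback test observes only the *combined*
# DAC + ADC contribution, so the individual contributions are unidentifiable.
measured_total = 1  # the "+1" result file

# Every one of these (DAC, ADC) pairs produces the same observation:
candidate_pairs = [(dac, measured_total - dac) for dac in (-1, 0, 1, 2)]

for dac, adc in candidate_pairs:
    assert dac + adc == measured_total  # each pair fits the measurement

print(candidate_pairs)
```

Nothing in the measurement distinguishes one pair from another, which is exactly why no quality can be assigned to either component individually.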

We simply don't know; hence the result is meaningless. You CANNOT award ANY quality to individual components from this test.

The best way to describe this test is that a certain combination of ADC and DAC provides the deepest correlation (meaning the result most true to the source, assuming that the right things are measured in the first place).

This then reduces the test to a "brute force" mix-and-match combination fest, where in theory an onboard AC97 DAC paired with a Behringer ADC might give you the "best" result.
The "results" posted are the sum of DAC, ADC, cabling, local interference, impedance, THD, gain-staging skills, etc. > meaningless, and not valid for awarding ANY qualities to individual components.

Originally Posted by capn357
This may just be a semantics issue, but I wouldn't characterize Audio Diffmaker as inherently prioritizing one aspect of the conversion (e.g. linear phase vs. minimum phase) over another. Rather, I would say that certain aspects of the conversion, or in this case, certain filters, have a greater contribution than others when it comes to affecting the transparency of the conversion.

Moreover, since I believe Didier is ultimately relying upon the Matlab script for the actual results that are published in the ranked list, it is that script that is most relevant.

Here is my understanding of what is incorporated in that script and the underlying premise (I'm sure if I'm mistaken it will be instantly called out):

The underlying premise is that if the DAC and ADC combination is perfectly transparent, there are ONLY two differences that should be observed in the recorded .wav file produced by this test protocol:
1. The resultant .wav file could be shifted in time from the original .wav file, and
2. The resultant .wav file could have some offset in amplitude (perfectly flat across frequency) based upon level settings used.
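The premise above is easy to demonstrate with synthetic data. This is my own sketch (the signal, delay, and gain values are arbitrary examples, not the actual test files): a "perfectly transparent" loopback differs from the source only by a time shift and a frequency-flat gain, both of which are exactly reversible.

```python
import numpy as np

# Sketch of the "perfectly transparent" premise: the recording is the
# source plus only a delay and a flat gain (arbitrary example values).
rng = np.random.default_rng(0)
source = rng.standard_normal(48000)   # 1 s of noise at 48 kHz
delay, gain = 120, 0.5                # hypothetical chain behavior

recorded = gain * np.concatenate([np.zeros(delay), source])

# Undoing the delay and the gain recovers the source exactly:
restored = recorded[delay:] / gain
assert np.allclose(restored, source)
```

Any chain behavior beyond these two effects (ripple, phase distortion, noise) would make the final assertion fail, which is the whole basis of the test.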

Consequently, only two adjustments are performed in the Matlab script: one to align the recorded and original .wav files in time, and one to align them in amplitude.
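The two adjustments can be sketched as follows. To be clear, this is my reconstruction of the idea, not Didier's actual Matlab script: I estimate the time offset by cross-correlation and the flat gain by least squares, which are the standard ways to perform these two alignments.

```python
import numpy as np

def align(original, recorded):
    """Estimate and undo the time offset and flat gain of a loopback.

    Assumes the recording starts at or after the original (delay >= 0)
    and is long enough to cover it.
    """
    # Time alignment: the cross-correlation peak gives the delay.
    xcorr = np.correlate(recorded, original, mode="full")
    delay = int(np.argmax(np.abs(xcorr))) - (len(original) - 1)
    shifted = recorded[delay:delay + len(original)]

    # Amplitude alignment: least-squares gain matching shifted -> original.
    gain = np.dot(shifted, original) / np.dot(shifted, shifted)
    return shifted * gain, delay

# Hypothetical "transparent" loopback: delayed by 50 samples, scaled by 0.8.
rng = np.random.default_rng(1)
src = rng.standard_normal(10000)
rec = 0.8 * np.concatenate([np.zeros(50), src])

aligned, delay = align(src, rec)
assert delay == 50 and np.allclose(aligned, src)
```

For a transparent chain these two corrections leave no residual at all; for a real chain, whatever remains after them is the quantity the test reports.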

Again, if the DAC/ADC combination is perfectly transparent, then the two files should be exactly the same after these two adjustments are made. ANY phase distortion and/or ANY variation of magnitude vs. frequency in the conversion will result in a less-than-perfect match, which is all this test is attempting to characterize. Certainly it is reasonable to assume that all of the filters available in the ADI-2 are imperfect, and it is also reasonable to assume that these filters won't all have the same effect on the ultimate transparency of the conversion as measured by this script.
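A less-than-perfect match is usually summarized as a single "null depth" figure. This is an assumed metric for illustration, not necessarily the script's exact figure of merit: the ratio of signal energy to residual energy after alignment, in dB, where a higher number means a more transparent chain.

```python
import numpy as np

def null_depth_db(original, aligned):
    """Signal-to-residual energy ratio in dB (assumed figure of merit)."""
    residual = aligned - original
    return 10 * np.log10(np.sum(original**2) / np.sum(residual**2))

# Simulate a slightly non-transparent chain: a one-sample echo at -40 dB
# (a hypothetical imperfection, standing in for filter ripple/phase error).
rng = np.random.default_rng(2)
src = rng.standard_normal(10000)
rec = src + 0.01 * np.concatenate([[0.0], src[:-1]])

print(f"null depth: {null_depth_db(src, rec):.1f} dB")
```

An imperfection 40 dB below the signal yields a null depth of roughly 40 dB; a truly transparent chain would drive the residual, and hence this figure, toward infinity.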

Now, how all of these less-than-perfect conversions translate into perceived sound quality is of course another matter entirely, and one that will be debated endlessly. Ten thousand years from now, when humans have succeeded in destroying almost all life on earth except the cockroaches, the cockroaches will be arguing about the relative merits of vinyl vs. digital and comparing the size of their DACs.