Evaluating AD/DA loops by means of Audio Diffmaker
Old 27th February 2020
  #1891
Lives for gear
 
Analogue Mastering's Avatar
You’re right, mine was a rough guesstimate, but still 3%-4% in the same product is a lot no?
Old 27th February 2020
  #1892
Gear Maniac
 

Quote:
Originally Posted by madriaan View Post
You guys sound pissed about the performance of your prized converters.
Couldn’t care less about this test. It’s totally pointless and done without method by a bunch of random people on the internet.

Worse they are compiling lots of numbers and playing pretend testing scientist proclaiming the test means something. This test is useless and the results mean nothing. That is just a matter of fact based on how it’s being conducted.

I’m here to set the record straight.

People are misleadingly quoting this test and believing that it can be a measure of interface quality. Look how many people are drinking the Kool-Aid. 64 pages now.

They are then taking these results to other threads, saying look how good this converter is or how shtty this one is, all quoting this pointless test.

Then uneducated people, or those without the critical thinking skills to evaluate the test or its methods, start jumping on the bandwagon.

Cue GS User 1 - “da guy on da Gearslutz tested all 1000 interfaces in the world and this one is da bestest”

GS User 2 - “yea dat one is ranked 130 and is a piece of sht look at da test da guy did”

GS User 3- “yea all your interfaces are sht motu units from 1994 are da best look at the test this guy did it’s amazing. Look at all these fancy numbers! I don’t understand them but so impressive”

GS User 4 - “you guys sound pissed about the performance of your prized converters”



Spreading misinformation is bad form in any event. Doing it on a gear forum while pretending it means something is irresponsible and contributes to everyone being dumber in general.

When it’s obvious people can’t discern truth in this pile of nonsense, those with the ability to understand it’s not a proper test of converter quality are obligated to speak up.

The world is less intelligent because everyone believes what they hear on TV and read on the internet. This thread is a disservice to all audio interface manufacturers and to this forum because it’s utter nonsense and meaningless, and it could influence purchasing decisions by future readers basing part of their decision on bs misinformation masquerading as scientific proof.

This thread should have a warning label at the top of every thread.

Warning - these tests are pointless and should be read for entertainment purposes only.
Old 27th February 2020
  #1893
Lives for gear
 
didier.brest's Avatar
Quote:
Originally Posted by thestarfire View Post
I’m here to set the record straight.
The Messiah !

list of results
Old 27th February 2020
  #1894
Lives for gear
 
BrianVengeance's Avatar
 

Quote:
Originally Posted by thestarfire View Post
Couldn’t care less about this test. It’s totally pointless and done without method by a bunch of random people on the internet.

Worse they are compiling lots of numbers and playing pretend testing scientist proclaiming the test means something. This test is useless and the results mean nothing. That is just a matter of fact based on how it’s being conducted.

I’m here to set the record straight.

.....

Warning - these tests are pointless and should be read for entertainment purposes only.
This speaks far more to the vaunted perspective from which you assess other people's intelligence than to anything else - I wholeheartedly encourage you to continue underestimating others wholesale. These sorts of things have a way of sorting themselves out on their own, given enough time.

I've been following this thread for a while (I even participated) and I've yet to see anyone credibly claim that it is anything more than what it is - a simple loopback test designed to exercise the DA and AD of multiple manufacturers' units with a common recording.
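For readers unfamiliar with how such a loopback difference is usually computed, here is a minimal sketch of the align/gain-match/subtract idea (a hypothetical illustration, not the thread's actual DiffMaker or Matlab procedure; the toy signal, the -0.5 dB gain error, and the noise level are all made up, and time alignment is assumed to have been done already):

```python
import math
import random

def rms(samples):
    """Root-mean-square level of a list of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def dbfs(samples):
    """RMS level in dB relative to digital full scale (1.0)."""
    return 20 * math.log10(rms(samples))

def difference_level(original, recorded):
    """Gain-match the loopback recording to the original, subtract,
    and report the residual level in dBFS. Assumes the two takes are
    already time-aligned (the real test aligns them first)."""
    gain = rms(original) / rms(recorded)          # level-match before nulling
    residual = [o - gain * r for o, r in zip(original, recorded)]
    return dbfs(residual)

# Toy data: a sine 'played out' and 'recorded back' 0.5 dB low,
# with a small amount of added converter noise.
random.seed(0)
sine = [0.5 * math.sin(2 * math.pi * 440 * n / 48000) for n in range(48000)]
loop = [s * 10 ** (-0.5 / 20) + random.gauss(0, 1e-4) for s in sine]
print(difference_level(sine, loop))  # a large negative number, far below 0 dBFS
```

Because the level mismatch is removed before subtracting, the residual reflects mostly the added noise, not the 0.5 dB gain error.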

The results are interesting, but anyone with half a lick of sense will properly contextualize the data and take it with the appropriate amount of sodium. If it factors into a purchase decision, then so be it - who are any of us to say how another should spend their own money? Get gear, make music, rinse, and repeat.

As for spreading misinformation, meh. This is an internet forum, and while ideally all of us should strive for The Truth (tm), this just isn't how the world works. Someone (or some number of someones) blathering about 20+ year old MOTU devices and how they stack rank against other gear is a) easy to identify and b) yet to have any meaningful impact on the sonic performance and workflow integration of my Metric Halo gear.

Identifying BS is a life skill we all need to develop. With experience comes good decisions. With bad decisions comes experience. We all had to start somewhere, and there is no reasonable expectation that any other person would make the same decisions we would make, or see the world as we do.

Getting worked up about this thread or the test is an especially bored tempest in a teapot.

Last edited by BrianVengeance; 27th February 2020 at 10:44 PM..
Old 28th February 2020
  #1895
Gear Maniac
 

Quote:
Originally Posted by thestarfire View Post
Couldn’t care less about this test. It’s totally pointless and done without method by a bunch of random people on the internet.

Worse they are compiling lots of numbers and playing pretend testing scientist proclaiming the test means something. This test is useless and the results mean nothing. That is just a matter of fact based on how it’s being conducted.

I’m here to set the record straight.

.....

Warning - these tests are pointless and should be read for entertainment purposes only.
It’s ok. Just sell your RME Babyface and buy something better.
Old 28th February 2020
  #1896
Gear Head
Quote:
Originally Posted by thestarfire View Post
Couldn’t care less about this test. It’s totally pointless and done without method by a bunch of random people on the internet.

Worse they are compiling lots of numbers and playing pretend testing scientist proclaiming the test means something. This test is useless and the results mean nothing. That is just a matter of fact based on how it’s being conducted.

I’m here to set the record straight.

.....

Warning - these tests are pointless and should be read for entertainment purposes only.
Sorry you feel that way in regard to certain aspects of your criticism, and you are quite welcome to your opinion as a fellow Gearslutz member, as we all are free to express our own opinions, be it praise or contempt. Contempt in your case.

I understand, and to some extent agree, that the 'compiling numbers' side of the results table - the 'rank' of converters - doesn't necessarily mean that a certain converter sounds absolutely better or worse than a converter ranked slightly higher or lower. It is still a useful 'guide', especially when seeing consistent results from different owners using the same gear.

I personally put more stock in using my actual ears and listening to the actual sound files themselves, be it through a half-decent monitor system in a semi-treated, not exactly ideal room, rather than looking at the numbers in terms of 'rank' and jumping to a foregone conclusion.

Of course, most people would naturally expect higher 'numbers' for more expensive gear. Anyone receiving a lesser number for their own 'newer', more modern gear, myself included, would have a knee-jerk reaction and think 'hang on a minute, what's going on here, that can't be right!'.

But for those people like myself who can't justify spending a gazillion on the latest, greatest bit of kit, I think tests like this can definitely be relevant in helping decide whether I am getting 'bang for my buck' with my current choice of AD/DA conversion and clocking. Luckily I am.

Hey, my own latest 2018 RME UFX+, ranks ever so slightly lower than the original UAD Apollo Mark1 from 2015. Does my file still 'sound' as good? Sure it does, money well spent, no regrets. Then the latest 2019 UAD Apollo 6x ranks lower again than the 2015 model? Still sounds great regardless.

The main conclusion I was most happy with was how great some of the less expensive gear can actually sound in comparison to the top end 'cork sniffing' gear.

The SPL Madison is an affordable standout for D/A conversion when in synchronised mode, combined separately with the TI PCM4222 evaluation module or the Focusrite Blue 245 as the A/D.

Impressive sound, which made the orchestral file sound so clear top to bottom.

If only the SPL Madison A/D conversion was as good, they would sell like hotcakes.

Also interesting is how certain converters with higher 'numbers' may sound clearer in certain parts of the spectrum (lows, mids or highs, but not necessarily all three at once).

I can now see why some top end studios use different converters depending on what the fundamental frequency is of the instrument being recorded.

Anyway, I am really thankful to didier for compiling these results, and I will continue to look in occasionally and listen for, hopefully, more surprising standouts.
Old 28th February 2020
  #1897
Gear Addict
 

Quote:
Originally Posted by thestarfire View Post
Couldn’t care less about this test. It’s totally pointless and done without method by a bunch of random people on the internet.

Worse they are compiling lots of numbers and playing pretend testing scientist proclaiming the test means something. This test is useless and the results mean nothing. That is just a matter of fact based on how it’s being conducted.

I’m here to set the record straight.


.....

The world is less intelligent because everyone believes what they hear on TV and read on the internet. This thread is a disservice to all audio interface manufacturers and to this forum because it’s utter nonsense and meaningless, and it could influence purchasing decisions by future readers basing part of their decision on bs misinformation masquerading as scientific proof.

This thread should have a warning label at the top of every thread.

Warning - these tests are pointless and should be read for entertainment purposes only.
The most stupid post I've read here in years

This kind of post should have a warning label at the top!
Old 4 weeks ago
  #1898
I have submitted results for most of the interfaces I've used in this thread, from the Lynx Hilo to MOTU, as it's nice to see how they compare, though admittedly I usually turn to Audio Science Review nowadays for its more detailed comparisons. It's just too difficult here to tell whether it's the AD or the DA that contributes most to the score. RMAA tests are the same, since you're still bound to a loopback where you can't be sure whether the AD or the DA contributes the most, though you do get a bit more information out of them, being able to see which frequencies specifically may be distorting, etc.

To me, Audio DiffMaker isn't very well documented, and it runs poorly, if at all, on my computer. (It was last updated about 12 years ago, so that doesn't help either.) I suppose I consider the scores here as just one score out of many. It's nice to see the results, and it's nice that this thread keeps going, but it wouldn't stop me from buying an interface. If the RMAA tests (Exound/prosound sites - you'll need a translator for them!) and the AP analyzer tests at Audio Science Review or other test sites show good measurements, I'd personally let that sort of testing sway which interface I buy rather than the single number here.

It's not bad to have these tests though - just more information overall
Old 3 weeks ago
  #1899
Gear Head
 

Quote:
Originally Posted by Analogue Mastering View Post
These are all over the place. That not every converter nulls to -59 dBFS has nothing to do with accuracy and everything to do with impedance. There are plenty of converters where the AD is less loud than the DA, let alone phase, noise, interference and other issues.

The only way this would make sense is: one ADC to test all DACs, one DAC to test all ADCs, and one cable, used in one location.
Then A and B are constant, which would attribute any difference to C (or to compatibility issues between A and C).
It takes an incredibly stubborn person to look at a list of results performed by different people with different cables and different serial numbers of the same model DAC/ADC that align almost perfectly and then claim those results and the test itself is plagued by the effects of randomness such as "refrigerators running nearby" and the like.

While I'll concede that there are all sorts of factors like that present that differ from test to test, they simply do NOT show up as a significant factor in the test results.

You have a unique capability to ignore the data. Let me ask you two questions:
1. How many tests on the same model of DAC/ADC by different people must align perfectly for you to concede that random microwave ovens and refrigerators are insignificant factors? 30? 100?
2. Were you on the O.J. jury by chance?
Old 3 weeks ago
  #1900
Lives for gear
 
Analogue Mastering's Avatar
Quote:
Originally Posted by capn357 View Post
It takes an incredibly stubborn person to look at a list of results performed by different people with different cables and different serial numbers of the same model DAC/ADC that align almost perfectly and then claim those results and the test itself is plagued by the effects of randomness such as "refrigerators running nearby" and the like.

While I'll concede that there are all sorts of factors like that present that differ from test to test, they simply do NOT show up as a significant factor in the test results.

You have a unique capability to ignore the data. Let me ask you two questions:
1. How many tests on the same model of DAC/ADC by different people must align perfectly for you to concede that random microwave ovens and refrigerators are insignificant factors? 30? 100?
2. Were you on the O.J. jury by chance?
If you have any affinity with manufacturing, you can look up concepts like lower/upper control limits and put something like a 0.5 dB difference between identical units in perspective. Are they outliers or a trend? You tell me... Another concept to think about: since the result is the sum of the output (DAC) and the input (ADC), on units where either the output or the input isn’t properly calibrated you create an offset, which is magnified when Didier normalizes.

So, if the DAC output is -2 dB, the recording is already 2 dB lower, which when normalized by Didier will result in a relatively raised noise floor, easily causing a 2 dB difference between converters A and B.
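That offset arithmetic is easy to sketch numerically (a hypothetical illustration of the argument only; the -60 dBFS noise floor and the 2 dB gain error are made-up numbers, not measurements of any converter in the thread):

```python
import math

def db(ratio):
    """Voltage ratio expressed in dB."""
    return 20 * math.log10(ratio)

signal_rms = 1.0                  # program level at digital full scale
noise_rms = 10 ** (-60 / 20)      # assumed ADC noise floor at -60 dBFS

# A DAC whose output is 2 dB low delivers the signal 2 dB quieter,
# while the ADC's own noise stays where it is ...
dac_gain = 10 ** (-2 / 20)
recorded_signal = signal_rms * dac_gain
recorded_noise = noise_rms

# ... so normalizing the recording back to full scale lifts the
# apparent noise floor by exactly those 2 dB.
normalize = signal_rms / recorded_signal
print(round(db(recorded_noise * normalize), 1))  # -58.0
```

The normalized noise floor lands at -58 dBFS instead of -60 dBFS, which is the 2 dB shift the post describes.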

The other perspective is to just feed the noise floor of one DAC into the ADC of another and look at SPAN to see what that looks like. This also puts the “correlation depth” results into perspective.

Finally, I’ve suggested using REW instead, as you can calibrate out the cables.
I don’t ignore data; I shoot down the data because it’s flawed. You just seem to believe, rather than know.

This test is as random as asking people on a forum to report their car’s fuel consumption.
Old 3 weeks ago
  #1901
Gear Head
 

Quote:
Originally Posted by Analogue Mastering View Post

So, if the DAC output is -2 dB, the recording is already 2 dB lower, which when normalized by Didier will result in a relatively raised noise floor, easily causing a 2 dB difference between converters A and B.
WRONG! This would ONLY be true if the quantity you're trying to measure (the RMS difference between levels in this case) were near the noise floor of the devices, which it isn't. The noise floor of these devices is several orders of magnitude below the RMS difference being measured. You need only examine the data to see this.

Lynx Hilo
Agustin Mongelli, Line Out Trim 0dBV, Line In Trim 2dBV: 1,9 dB (L), 1,9 dB (R)..Corr Depth: 41,9 dB (L), 43,7 dB (R) Difference*: -59.1 dBFS (L), -59.2 dBFS (R)
capn357: 0.0 dB (L), 0.1 dB (R)..Corr Depth: 41,8 dB (L), 43,6 dB (R) Difference*: -59.1 dBFS (L), -59.1 dBFS (R)

MOTU 828 MkII
laurend: 6.1 dB (L), 6.2 dB (R) Corr Depth: 36,4 dB (L), 37,9 dB (R) Difference*: -56.7 dBFS (L) -58.0 dBFS (R)
MusicManic: -0.1 dB (L), -0,0 dB (R) Corr Depth: 36,3 dB (L), 37,9 dB (R) Difference*: -56.7 dBFS (L) -58.0 dBFS (R)

In the first example there was a 1.9 dB difference in levels between the two test cases and only a 0.1 dB measurable difference in one of the channels; in the second example there was a 6 dB difference in levels between the two test cases and no measurable difference in either channel.

Maybe if we inserted a crappy vinyl record player into the test protocol with <50dB of dynamic range, you'd have a point, but not so much with these DACs/ADCs.

Quote:
Originally Posted by Analogue Mastering View Post
This test is as random, as asking people on a forum to report their car fuel consumption.
WRONG! I dare say you would see significantly greater variance in the results of the car fuel consumption survey than is evident in Didier's results. Again, an examination of the data should make that obvious to anyone willing to see.
Old 3 weeks ago
  #1902
Here for the gear
Quote:
Originally Posted by Analogue Mastering View Post
You’re right, mine was a rough guesstimate, but still 3%-4% in the same product is a lot no?
If you are good with math you can easily calculate that a +/-0.5 dB variance (as stated for many "professional" products) corresponds to a 12.2% difference across the span, and that 3-4% translates to roughly +/-0.13 to 0.17 dB from sample to sample.
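For anyone wanting to check those figures, the conversion between dB and percent voltage ratio is just the standard 20·log10 relationship (a quick sketch of the arithmetic, not tied to any particular converter):

```python
import math

def db_to_pct(db_val):
    """Percent change in voltage ratio for a given dB span."""
    return (10 ** (db_val / 20) - 1) * 100

def pct_to_db(pct):
    """dB span corresponding to a given percent voltage-ratio change."""
    return 20 * math.log10(1 + pct / 100)

print(round(db_to_pct(1.0), 1))   # the full +/-0.5 dB span is about 12.2 %
print(round(pct_to_db(3.0), 3))   # 3 % is about 0.257 dB, i.e. roughly +/-0.13 dB
print(round(pct_to_db(4.0), 3))   # 4 % is about 0.341 dB, i.e. roughly +/-0.17 dB
```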

Quote:
Originally Posted by Analogue Mastering View Post
...
This test is as random, as asking people on a forum to report their car fuel consumption.
The test is absolutely not random, and the results are very repeatable; however, it is really far from ideal because of the missing methodology.

Quote:
Originally Posted by didier.brest View Post
the more transparent is the DAD conversion.

Further explanation here.
If so, then why is the DAD AX24 measured on par with the Behringer ADA8200 and much worse than the MOTU 2408mk3, the Mytek, and even the ESI Juli@?

Quote:
Originally Posted by didier.brest View Post
...
MOTU 2408mk3 (ram75)
0,1 dB (L), 0,1 dB (R) Corr Depth: 36,3 dB (L), 37,9 dB (R) Difference*: -56,7 dBFS (L), -58,1 dBFS (R)

Mytek 8 x 192 AD/DA
MainTime: 0.4 dB (L), 0.5 dB (R) Corr Depth: 35,3 dB (L), 36,9 dB (R) Difference*: -55.8 dBFS (L) -57.1 dBFS (R)
mudseason: -0.1 dB (L), -0.1 dB (R)..Corr Depth: 35,8 dB (L), 37,4 dB (R) Difference*: -56.3 dBFS (L), -57.7 dBFS (R)

ESI [email protected] (Agustin Mongelli)
5.6dB (L), 5.7dB (R) Corr Depth: 33,0 dB (L), 34,6 dB (R) Difference*: -53.4 dBFS (L), -54.7 dBFS (R)
...


Apogee Symphony I/O
(Ajantis) 0,0 dB (L), 0,0 dB (R) Corr Depth: 43,4 dB (L), 45,6 dB (R) Difference*: -51,2 dBFS (L), -50,1 dBFS dB (R)
Diegel -0.2 dB (L), -0.1 dB (R)..Corr Depth: 43,4 dB (L), 45,6 dB (R) Difference*: -51.1 dBFS (L), -50.1 dBFS (R)

MOTU 16A (davidbayles)
728.5 µs, 1.518 dB (L) 1.513 dB (R) Corr Depth: 43,4 dB (L), 45,6 dB (R) Difference*: -51.1 dBFS (L), -50.1 dBFS (R)

MOTU 8A
marluck:
3.3 dB (L), 3.3 dB (R)..Corr Depth: 43,4 dB (L), 45,6 dB (R) Difference*: -51.1 dBFS (L), -50.1 dBFS (R)
Zek: 1.2 dB (L), 1.1 dB (R)..Corr Depth: 43,4 dB (L), 45,7 dB (R) Difference*: -51.1 dBFS (L), -50.0 dBFS (R)
...

DAD AX24 ([email protected])
-0,1 dB (L), -0,1 dB (R) Corr Depth: 27,4 dB (L), 28,9 dB (R) Difference*: -46.8 dBFS (L), -47.3 dBFS (R)

Behringer ADA8200 (didier.brest)
gain @ 10 o'clock : 4.3 dB (L), 4.6 dB (R)..Corr Depth: 27,6 dB (L), 29,2 dB (R) Difference*: -46.6 dBFS (L) , -47.7 dBFS (R)
gain mini : 14,7 dB (L), 14,3 dB (R)..Corr Depth: 27,9 dB (L), 29,5 dB (R) Difference*: -46,8 dBFS (L) , -47,8 dBFS (R)
...
And if I understand correctly, the original Apogee Symphony I/O and the MOTU 8A and 16A outperform ALL other ADDAs in the 0.1-12 kHz band according to your test results (except some dedicated ADCs and DACs). It is strange that full-range results can be better than results over a limited band. There is a serious flaw in the interpretation of the recorded loopbacks, probably from a lack of understanding of the process and the resulting numbers, or even from incorrectly selected tools for gathering the data.

But, IMHO, these results can be useful if you want to check a used converter before buying (by direct comparison of received numbers).

P.S.: It is a pity that such great work, with a lot of people involved, was done almost in vain.
But, on the other hand -
Quote:
Originally Posted by didier.brest View Post
...
for subjective evaluation, listening to music is more enjoyable than listening to a sine sweep.
Old 3 weeks ago
  #1903
Lives for gear
 
didier.brest's Avatar
Quote:
Originally Posted by EdwardZ7 View Post
If so, then why DAD AX24 is measured on par with Behringer ADA8200 and much worse than Motu 2408mk3, Mytek, and even ESI [email protected]?
Are these measurements not in agreement with your listening of the loopback files ?
Old 3 weeks ago
  #1904
Here for the gear
Quote:
Originally Posted by didier.brest View Post
Are these measurements not in agreement with your listening of the loopback files ?
They sound almost identical on my $100 phone and $10 earbuds.

You made a recommendation for DAD converters, which rank below the average in your list.
So by this logic, these measurements are not in agreement with YOUR listening of the loopback files.
Old 3 weeks ago
  #1905
Lives for gear
 
didier.brest's Avatar
Quote:
Originally Posted by EdwardZ7 View Post
You made the recommendation for DAD converters,

Where ? When ?

list of the results
Old 3 weeks ago
  #1906
Gear Head
 

Clarification for EdwardZ7

Quote:
Originally Posted by didier.brest View Post

Where ? When ?

list of the results
Quote:
Originally Posted by blayz2002 View Post


I have read through this thread some time ago and I'm not really technically knowledgeable to really understand what the outcome of the tests represents here. It's also probably been explained already, but I assume the higher up the list an interface is, the better quality the AD, DA, or both are? Or at least that is what should be expected?

What I really want to ask is, if I wanted to use this ranking to give a guide to purchasing a high quality and accurate DA converter, would I be barking up the wrong tree?

Sorry if anyone has explained this already, and if so feel free to just direct me to the post (if not too much trouble).
Quote:
Originally Posted by blayz2002 View Post


the higher up the list an interface is, the better quality the AD, DA, or both are?
Quote:
Originally Posted by didier.brest View Post
the more transparent is the DAD conversion.

Further explanation here.
In the context of the thread, didier.brest is responding to blayz2002 and referring to the round trip results of the test (DAD; aka DA-AD) and what that means, not comparing anything to Digital Audio Denmark (DAD) converters. Therefore, no contradiction and no recommendation of any specific converters either.
Old 2 weeks ago
  #1907
Lives for gear
 
didier.brest's Avatar
Quote:
Originally Posted by BenF View Post
In the context of the thread, didier.brest is responding to blayz2002 and referring to the round trip results of the test (DAD; aka DA-AD) and what that means, not comparing anything to Digital Audio Denmark (DAD) converters. Therefore, no contradiction and no recommendation of any specific converters either.
Many thanks for making this clear !
Old 2 weeks ago
  #1908
Here for the gear
Quote:
Originally Posted by didier.brest View Post

Where ? When ?

list of the results
You have a unique ability to not notice your own written text:
Quote:
Originally Posted by didier.brest View Post
the more transparent is the DAD conversion.

Further explanation here.
I think this discussion should be over.

P.S.: I really appreciate your work. If you want to continue to process new "results" -
I'll record the E-MU 0404USB, the MOTU 624 and maybe one more 2408mk3 - if I can get into the attic
Old 2 weeks ago
  #1909
Lives for gear
 
didier.brest's Avatar
Quote:
Originally Posted by EdwardZ7 View Post
You have a unique ability to do not notice your own written text
that you extracted from a post I wrote a month and a half ago, without the quote that this text was completing, which totally changes its meaning. Fortunately BenF found the post you were alluding to.

Quote:
Originally Posted by EdwardZ7 View Post
I'll record E-MU 0404USB, Motu 624 and, maybe, one more 2408mk3 - if I get into the attic
Your tests will be welcome.

Last edited by didier.brest; 2 weeks ago at 11:27 AM..
Old 2 weeks ago
  #1910
Lives for gear
 
didier.brest's Avatar
Quote:
Originally Posted by EdwardZ7 View Post
And if I understand correctly, the original Apogee Symphony I/O and the MOTU 8A and 16A outperform ALL other ADDAs in the 0.1-12 kHz band according to your test results (except some dedicated ADCs and DACs). It is strange that full-range results can be better than results over a limited band. There is a serious flaw in the interpretation of the recorded loopbacks, probably from a lack of understanding of the process and the resulting numbers, or even from incorrectly selected tools for gathering the data.
I have already acknowledged the weirdness of these results from Audio DiffMaker, and that I don't know exactly what the correlated null depth (what Corr. Depth. in the ADM measurement report stands for) is. It seems to be a rather unusual figure: googling correlated null depth gives only two relevant links that relate to ADM (and this Gearslutz thread...) without saying what it is.

I suspected that the reason why the correlated null depth may be very different from the ratio of the original level to the difference level, which is the well-known null depth, might be that ADM measures it in the 100 Hz - 12 kHz band with its default settings, while my audio RMS level measurements are full band.

But that appeared to be untrue.

Anyway, the relevance of the correlated null depth does not matter much here, because the loopback tests are ranked according to the difference levels, initially measured by means of WaveLab from the difference wav files provided by ADM (the Difference figures in the list of the results) and now computed in Matlab without any contribution from ADM (the Difference* figures in the list of the results). I still report the Corr. Depth., Difference and Difference* figures here for each new test, but no longer keep the Difference ones in the list of results. Maybe I should also disregard the Corr. Depth. figures. I am reluctant to do so because they are the only ones (except for some Difference figures of old tests that I did not reprocess in Matlab) coming from ADM, without which I would not have started this thread.
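For reference, the well-known null depth mentioned above is simply the RMS level of the original over the RMS level of the difference, expressed in dB (a generic sketch with made-up numbers, not ADM's internal computation):

```python
import math

def rms(samples):
    """Root-mean-square level of a list of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def null_depth_db(original, difference):
    """Classic null depth: original RMS over difference RMS, in dB."""
    return 20 * math.log10(rms(original) / rms(difference))

# Hypothetical numbers: a sine 'track' and a residual 500 times smaller.
track = [0.8 * math.sin(2 * math.pi * n / 100) for n in range(1000)]
residual = [s / 500 for s in track]
print(round(null_depth_db(track, residual), 1))  # 54.0
```

A residual 500 times below the original corresponds to a null depth of about 54 dB (20·log10 500), which is the same scale the Difference* dBFS figures describe relative to a full-scale source.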


list of the results

Last edited by didier.brest; 1 week ago at 04:44 PM.. Reason: Adding a link to the list of the results
Old 1 week ago
  #1911
Here for the gear
 

Does anyone have an Antelope Orion 32hd gen 3 that they would be willing to run this test with? I've asked before but I know a lot of us are stuck inside at the moment so I figure it's worth another ask. I'm just really curious how it fares. I can't get my hands on one locally to demo and all the places I could buy from are special order/no returns. I'm not committing to that without having more info. I find this particular test interesting as I would be doing a digital patch bay of sorts with a lot of analog outboard gear. In some cases that will mean several passes back and forth through the converters.

I'm using a motu 16a right now and might just get a second motu unit to meet my channel count requirements. The Antelope has more appeal for me though if the conversion is a definite upgrade. Thanks!
Old 1 week ago
  #1912
Here for the gear
Ok, I finally understood the purpose of this test: OOB mastering.
But for other goals it's not informative, unfortunately.
Why? The answer is pretty simple: clocking. It stays the same (synced) for AD and DA during a loopback test, but in everyday usage it can fluctuate, sometimes deeply, so two digitized sine waves from a generator can differ more than actual loopbacks do.

Anyway, here is a link to loopbacks of the E-MU 0404 USB and MOTU 624 AVB (one take with output at 0 dB and input at +2 dB, another take with output at -20 dB and input at +22 dB): https://drive.google.com/open?id=17s...QkH1899vArx7di
Old 1 week ago
  #1913
Lives for gear
 
didier.brest's Avatar
Quote:
Originally Posted by EdwardZ7 View Post
Anyway, here is a link to loopbacks of the E-MU 0404 USB and MOTU 624 AVB (one take with output at 0 dB and input at +2 dB, another take with output at -20 dB and input at +22 dB): https://drive.google.com/open?id=17s...QkH1899vArx7di
E-MU 0404
ADM: 3.642 µs, 0.049 dB (L), -0.031 dB (R). Corr. Depth: 22.4 dB (L), 24.1 dB (R). Difference: -42.1 dBFS (L), -43.1 dBFS (R)
Matlab: -1.182 µs, 0.0454dB (L), -0.0368 dB (R), Difference: -42.9 dBFS (L), -44.3 dBFS (R)

MOTU 624, -20dB output and +22dB input
ADM: -2.07 µs, 0.015 dB (L), -0.007 dB (R). Corr. Depth: 43.4 dB (L), 45.6 dB (R). Difference: -45.9 dBFS (L), -45.5 dBFS (R)
Matlab: 2.726 µs, 0.0117 dB (L), -0.0098 dB (R), Difference: -51.1 dBFS (L), -50.1 dBFS (R)

MOTU 624, 0dB output and +2dB input
ADM: -2.018 µs, -0.001 dB (L), -0.003 dB (R). Corr. Depth: 43.4 dB (L), 45.6 dB (R). Difference: -45.9 dBFS (L), -45.5 dBFS (R)
Matlab: 2.780 µs, -0.0039 dB (L), -0.0058 dB (R), Difference: -51.1 dBFS (L), -50.1 dBFS (R)

To be added to the next issue of the list of the results.
Old 1 week ago
  #1914
Gear Nut
 

This thread is a great service to all people interested in buying a new converter and knowing what level of transparency each company offers its customers, which I hugely appreciate and would like to contribute to. But there is one question on my mind for whoever is in charge of publishing the average null level: do you just match with 1-sample accuracy, or do you go down to sub-sample accuracy? I think a lot of the findings are wrong if you only moved the tracks with 1-sample accuracy, because some filters are shifted at the sub-sample level.
Old 1 week ago
  #1915
Lives for gear
 
didier.brest's Avatar
Quote:
Originally Posted by spankjam View Post
do you just match with 1 sample accuracy or even go on a sub sample accuracy level?
Subsample accuracy of course! The time shift correction value is given for each new test but not reported in the list of the results. For instance, it is 2.780 µs for the test whose result is in my post of two days ago:

Quote:
Originally Posted by didier.brest View Post
MOTU 624, 0dB output and +2dB input
ADM: -2.018 µs, -0.001 dB (L), -0.003 dB (R). Corr. Depth: 43.4 dB (L), 45.6 dB (R). Difference: -45.9 dBFS (L), -45.5 dBFS (R)
Matlab: 2.780 µs, -0.0039 dB (L), -0.0058 dB (R), Difference: -51.1 dBFS (L), -50.1 dBFS (R)
That is less than a one-sample time shift: 1 / 44.1 kHz = 22.676 µs.
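For what it's worth, a sub-sample shift like this does not require upsampling: it can be applied as a linear phase rotation in the frequency domain. A minimal Python sketch (my own illustration, not the actual Matlab code used for these tests):

```python
import numpy as np

def fractional_delay(x, delay_samples):
    """Delay x by a possibly fractional number of samples using a
    linear phase shift in the frequency domain (circular delay)."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x))  # frequency in cycles per sample
    return np.fft.irfft(X * np.exp(-2j * np.pi * f * delay_samples), len(x))

# Apply the 2.780 µs shift quoted above at fs = 44.1 kHz,
# i.e. about 0.123 samples
fs = 44100
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 1000 * t)
y = fractional_delay(x, 2.780e-6 * fs)
```

Because the shift is circular, this is exact for periodic test signals and a good approximation elsewhere away from the file edges.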

list of the results.

Last edited by didier.brest; 5 days ago at 01:56 PM..
Old 6 days ago
  #1916
Gear Nut
 

Quote:
Originally Posted by didier.brest View Post
Subsample accuracy of course! The time shift correction value is given for each new test but not reported in the list of the results. For instance, it is 2.780 µs for the test whose result is in my post of two days ago:



That is less than a one-sample time shift: 1 / 44.1 kHz = 22.676 µs.

list of the results.
May I ask how you're using Matlab to do it? I did it manually with another tool; is Matlab a manual process as well? And wouldn't you need to upsample to make such small movements?

I have Wolfram mathematica.
Old 6 days ago
  #1917
Lives for gear
 
didier.brest's Avatar
Quote:
Originally Posted by spankjam View Post
May I ask how you're using Matlab to do it? I did it manually with another tool; is Matlab a manual process as well?
Computing the L and R difference RMS levels from the time delay and the L and R gain correction values, which are given here with (most often more than) adequate accuracy for each test before only rounded RMS levels and gains are included in the list of the results, is quite simple with any mathematical tool that performs Fourier transforms, as shown here for Matlab. So anybody able to use such a tool can check that the difference level values are correct for the given time delay and gain values. Computing those time delay and gain values themselves may be handled as a standard optimization problem, as is done, I guess, in ADM. Initially I was doing this computation in Matlab 'manually', as you said. I'm now using a script (not finalized enough to be shared), which makes the task easier and faster. I'm confident that I am getting the time delay and gain values that minimize the maximum of the L and R difference levels to within 0.1 dB, but if someone finds that a lower maximum of the L and R difference levels is achievable, he/she is welcome to post the corresponding alternative time delay and gain correction values here.
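The optimization step can be sketched as follows. This is a toy illustration in Python of the general idea (align, apply the closed-form least-squares gain, minimize the residual over the delay), not the actual Matlab script used for the list of results; all names and signal values are made up:

```python
import numpy as np

def delayed(x, d):
    """Circularly delay x by d samples via a frequency-domain phase shift."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x))
    return np.fft.irfft(X * np.exp(-2j * np.pi * f * d), len(x))

def residual_rms(d, ref, rec):
    """Residual RMS after removing delay d and the least-squares best gain."""
    y = delayed(rec, -d)
    g = np.dot(ref, y) / np.dot(y, y)  # closed-form optimal gain
    return np.sqrt(np.mean((ref - g * y) ** 2))

# Toy 'loopback': the reference delayed by 0.37 samples, scaled by 0.98,
# with a little noise added
rng = np.random.default_rng(1)
ref = rng.standard_normal(4096)
rec = 0.98 * delayed(ref, 0.37) + 1e-4 * rng.standard_normal(4096)

# Coarse-to-fine grid search for the delay that minimizes the residual
coarse = np.linspace(-2, 2, 81)
best = min(coarse, key=lambda d: residual_rms(d, ref, rec))
fine = np.linspace(best - 0.05, best + 0.05, 101)
best = min(fine, key=lambda d: residual_rms(d, ref, rec))
print(round(best, 2))  # recovers a delay near 0.37 samples
```

A real implementation would use a proper 1-D optimizer rather than a grid, but the structure (inner closed-form gain, outer search over the delay) is the same.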

Last edited by didier.brest; 5 days ago at 01:57 PM..
Old 3 days ago
  #1918
Lives for gear
 
BrianVengeance's Avatar
 

Quote:
Originally Posted by Analogue Mastering View Post
This test is as random, as asking people on a forum to report their car fuel consumption.
You’re making an argument of precision, not utility. The example you listed still has utility, and anyone but a halfwit would be able to integrate this with other bits of information to compile a larger picture.
Old 2 days ago
  #1919
Lives for gear
 
Analogue Mastering's Avatar
Quote:
Originally Posted by BrianVengeance View Post
You’re making an argument of precision, not utility. The example you listed still has utility, and anyone but a halfwit would be able to integrate this with other bits of information to compile a larger picture.
The analogy is correct: there are published specs and there are uncontrolled measured results (user contributions), and there are other bits of information in both scenarios. How random or relevant they are is debatable. Even a halfwit would understand that. The beauty of these results is in the eye of the beholder. Forest and trees...

A better example of utility: EU mains power is nominally 220 V, yet when you measure it, it varies by location between 210 and 230 V, and even at the same location it fluctuates during the day. On a measurement scale of 1-10 kV it's all very close, but measured against a 220 V target, a variance of up to 10% is huge. It's all about interpreting results and context.
Old 2 days ago
  #1920
Lives for gear
 
BrianVengeance's Avatar
 

Quote:
Originally Posted by Analogue Mastering View Post
The analogy is correct, there are published specs and there are uncontrolled measured results (user contributions) there are other bits of information in both scenario’s. How random or relevant they are is debatable. Even a halfwit would understand that. The beauty of these results are in they eye of the beholder. Forest and trees.......
Bias can occur in the interpretation of any data set, lab controlled or otherwise - on this I believe we are aligned. Torture any set of facts long enough, and they will confess whatever one wishes.

Quote:
Originally Posted by Analogue Mastering View Post
A better example of utility would be eu power is 220v, however when u measure it will differ from location between 210-230v, even more so on the same location it fluctuates during the day. On a measurement scale of 1-10kvolt it’s all very close, but measuring up to a 10% variance towards a 220v target the difference is huge. It’s all about interpeting results and context.
Again, I believe we are aligned. 220 V would be the nominal target for power line voltage in the EU. Precision measurements would show variability based on the parameters you listed, but with that granularity comes increased coupling to the testing conditions themselves. In other words, rigorous testing of the power in your space will tell me little to nothing about what I can expect from rigorous testing of the power in my space.

Additionally, if I were dependent on that level of granularity in the data, I likely wouldn't be looking at data sets like the one this test generates beyond a superficial level. Utility is coupled to need: if one needs more, or something different, from the data, this isn't the set for them.

To bounce back to the forum gas mileage analogy, the utility is in asking broad questions like "what is the fuel consumption under random, real-world conditions?" This cannot be tested in a lab, and variability in procedure must be accounted for (hence some normalization and controlling for parameters). Statistics and data science are entire fields devoted to exactly this sort of thing.

It won't tell you anything about the car's driving dynamics, polish of the interior, or behavior of the infotainment center, but it also isn't intended to do so.