Evaluating AD/DA loops by means of Audio Diffmaker (Audio Interfaces forum)
Old 13th September 2012
  #151
Gear interested
 

Quote:
Originally Posted by genelec79 View Post
It looks like that there is no more "ultimate thread" !!?!?!?!
SUPPORT FOR nms !!!!!

First post here, but a very long-time reader. I should have voiced my support for NMS long ago, as his 'Ultimate Converter Test' has been very informative, especially given the lack of any meaningful comparative converter analysis available elsewhere. I am sorry the thread has been negated, as I really considered it one of the best and most helpful reads regarding hardware I've ever come across: unbiased, a very hard thing to find. Here's my vote to continue this thread here or elsewhere. Thanks, NMS, for all of the effort and time you put into this; it really has been helpful and informative to many of us.
Old 13th September 2012
  #152
nms
Lives for gear
 
nms's Avatar
Old 13th September 2012
  #153
Lives for gear
Quote:
Originally Posted by nms View Post
Thanks guys. If you have anything to say in support here's the link:


https://www.gearslutz.com/board/forum...st-thread.html

We needed moderator assistance there, not just sitting back and letting things escalate & deteriorate until I get penalized for dealing with it!
I have done so.

Thanks again NMS.
Old 16th September 2012
  #154
Lives for gear
 
didier.brest's Avatar
 

Thread Starter
DA-AD loops from this wav file evaluated by means of Audio DiffMaker.

AD/DA converters


Lynx Hilo (CoolColJ)
Corr Depth: 41.8 dB (L), 43.7 dB (R), Difference: -56.8 dBFS (L), -57.2 dBFS (R)

MOTU 828mk2 (laurend)
Corr Depth: 36.4 dB (L), 37.9 dB (R), Difference: -55.7 dBFS (L), -57.0 dBFS (R)

Mytek 8 x 192 (MainTime)
Corr Depth: 35.3 dB (L), 36.9 dB (R), Difference: -55.2 dBFS (L), -56.0 dBFS (R)

Aurora 8 (cylens)
Corr Depth: 34.6 dB (L), 36.3 dB (R), Difference: -54.2 dBFS (L), -55.1 dBFS (R)

TC Electronic Impact Twin (AndG)
Corr Depth: 32.0 dB (L), 33.6 dB (R), Difference: -51.7 dBFS (L), -52.7 dBFS (R)

RME HDSP 9632 (schtim)
Corr Depth: 31.4 dB (L), 33.0 dB (R), Difference: -51.2 dBFS (L), -52.1 dBFS (R)

Prism Orpheus (MainTime)
Corr Depth: 30.8 dB (L), 32.4 dB (R), Difference: -51.1 dBFS (L), -52.4 dBFS (R)

RME Fireface 400 (didier.brest)
Corr Depth: 30.2 dB (L), 31.8 dB (R), Difference: -49.8 dBFS (L), -50.8 dBFS (R)

TC Electronic Impact Twin (ben)
Corr Depth: 29.1 dB (L), 30.8 dB (R), Difference: -48.7 dBFS (L), -49.7 dBFS (R)

Echo Audiofire 12 (guze)
Corr Depth: 28.8 dB (L), 30.1 dB (R), Difference: -48.4 dBFS (L), -49.2 dBFS (R)

E-MU 1616m (left channel only, ben)
Corr Depth: 28.9 dB (L), Difference: -45.9 dBFS (L)

Apogee Symphony I/O (Ajantis)
Corr Depth: 43.4 dB (L), 45.6 dB (R), Difference: -45.9 dBFS (L), -45.5 dBFS (R)

Echo Audiofire 4 (david1103)
Corr Depth: 25.6 dB (L), 27.4 dB (R), Difference: -45.2 dBFS (L), -46.3 dBFS (R)

Metric Halo ULN-2 (DownSideUp)
Corr Depth: 25.3 dB (L), 26.9 dB (R), Difference: -44.9 dBFS (L), -46.0 dBFS (R)

Echo Audiofire 8 non-ADAT CS4272 (guze)
Corr Depth: 26.4 dB (L), 28.2 dB (R), Difference: -40.6 dBFS (L), -40.6 dBFS (R)

RME Multiface II, unbalanced, -10 dBV (juki, Audiofanzine)
Corr Depth: 22.7 dB (L), 24.4 dB (R), Difference: -42.4 dBFS (L), -43.5 dBFS (R)

RME Fireface 800 (genelec79)
Corr Depth: 22.3 dB (L), 23.9 dB (R), Difference: -41.9 dBFS (L), -43.0 dBFS (R)

RME Multiface II, balanced, +4 dBu (juki, Audiofanzine)
Corr Depth: 22.3 dB (L), 23.9 dB (R), Difference: -41.9 dBFS (L), -43.0 dBFS (R)

Apogee Duet2 (isma)
Corr Depth: 19.8 dB (L), 21.4 dB (R), Difference: -40.2 dBFS (L), -41.6 dBFS (R)

Focusrite Saffire Pro 24 DSP (CoolColJ)
Corr Depth: 20.2 dB (L), 22.0 dB (R), Difference: -38.9 dBFS (L), -39.9 dBFS (R)

RME Fireface UFX/1,2/3,4 (didier.brest)
Corr Depth: 18.5 dB (L), 20.1 dB (R), Difference: -38.9 dBFS (L), -40.3 dBFS (R)

RME Fireface UFX/5,6/3,4 (didier.brest)
Corr Depth: 18.4 dB (L), 20.1 dB (R), Difference: -38.8 dBFS (L), -40.3 dBFS (R)

Yamaha Steinberg MR816X (didier.brest)
Corr Depth: 19.1 dB (L), 21.0 dB (R), Difference: -38.6 dBFS (L), -40.0 dBFS (R)

RME Fireface UFX/1,2/9,10 (didier.brest)
Corr Depth: 14.7 dB (L), 16.2 dB (R), Difference: -35.0 dBFS (L), -36.3 dBFS (R)

Edirol FA-66 (didier.brest)
Corr Depth: 15.6 dB (L), 17.1 dB (R), Difference: -34.4 dBFS (L), -35.4 dBFS (R)

RME Babyface (didier.brest)
Corr Depth: 9.2 dB (L), 10.8 dB (R), Difference: -28.8 dBFS (L), -29.8 dBFS (R)


Sets of different converters


Aurora 8 clocked by RME HDSPe RayDAT (cylens)
Corr Depth: 34.6 dB (L), 36.3 dB (R), Difference: -55.3 dBFS (L), -56.5 dBFS (R)

Echo Audiofire 8 with ADAT clocked by RME HDSP 9632 (schtim)
Corr Depth: 25.9 dB (L), 27.5 dB (R), Difference: -45.5 dBFS (L), -46.5 dBFS (R)

M-Audio Profire 2626 clocked by RME HDSPe RayDAT (cylens)
Corr Depth: 24.4 dB (L), 25.9 dB (R), Difference: -44.1 dBFS (L), -45.0 dBFS (R)

M-Audio Profire 2626 clocked by Lynx Aurora 8 (cylens)
Corr Depth: 24.4 dB (L), 25.9 dB (R), Difference: -44.0 dBFS (L), -44.9 dBFS (R)

RME Fireface 800 clocked by Yamaha 02R (genelec79)
Corr Depth: 22.3 dB (L), 23.9 dB (R), Difference: -41.9 dBFS (L), -43.0 dBFS (R)

RME Babyface --> RME Fireface 400 (didier.brest)
Corr Depth: 16.6 dB (L), 18.4 dB (R), Difference: -29.4 dBFS (L), -28.8 dBFS (R)

Benchmark DAC1 phone output --> Echo Audiofire 4 (david1103)
Corr Depth: 10.4 dB (L), 10.2 dB (R), Difference: -22.6 dBFS (L), -23.1 dBFS (R)

To contribute to this test:
- download the test 2 x 44 kHz x 24-bit wav file (31 MB),
- connect the two DAC outputs to two ADC inputs with cables,
- play back the test file through the DAC outputs and record from the ADC inputs,
- upload the resulting 2 x 44 kHz x 24-bit wav file to MediaFire or any other file-sharing site, and
- post the download link here with information identifying the ADC and DAC.
Thank you!
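For readers who want to sanity-check a loop themselves before uploading, here is a rough numpy sketch of the kind of figure DiffMaker reports as the difference level. This is not DiffMaker's actual algorithm: it assumes the two files are already time-aligned and uses a simple least-squares gain match before subtracting.

```python
import numpy as np

def difference_level_dbfs(original, loopback):
    """Least-squares gain-match the loopback to the original, subtract,
    and return the residual RMS in dBFS (0 dBFS = full scale 1.0).
    Both arrays are float signals assumed already time-aligned."""
    gain = np.dot(original, loopback) / np.dot(loopback, loopback)
    residual = original - gain * loopback
    rms = np.sqrt(np.mean(residual ** 2))
    return 20.0 * np.log10(max(rms, 1e-12))  # clamp so silence doesn't -inf
```

A perfect loop nulls down to the clamp floor; real converters land in the roughly -30 to -60 dBFS range seen in the table above.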
Old 17th September 2012
  #155
Gear Head
 

Apogee Symphony I/O (Ajantis)
Corr Depth: 43.4 dB (L), 45.6 dB (R), Difference: -45.9 dBFS (L), -45.5 dBFS (R)

Is everything fine with these measurements, or do I not understand the placement order in this chart?
Thanks
Old 17th September 2012
  #156
Gear Head
 

Quote:
Originally Posted by genelec79 View Post
Apogee Symphony I/O (Ajantis)
Corr Depth: 43.4 dB (L), 45.6 dB (R), Difference: -45.9 dBFS (L), -45.5 dBFS (R)

Is everything fine with these measurements, or do I not understand the placement order in this chart?
Thanks
Edit: I found the answer on the previous page.
Old 21st September 2012
  #157
We need the new MOTU Track16!!!
Old 24th September 2012
  #158
Gear Nut
 

NMS!!!!! nice work and interesting results
Old 25th September 2012
  #159
Lives for gear
 

Quote:
Originally Posted by BasspirO View Post
NMS!!!!! nice work and interesting results
This thread is a different one... the OP and organizer of these results is didier.brest.

NMS' famous (and also very good) thread got nixed after he pulled a "Billie Joe Armstrong" and yanked all the data...
Old 29th September 2012
  #160
Lives for gear
 
CoolColJ's Avatar
 

didier.brest - you should just update the original post with new results instead of replying... otherwise the thread will become excessively large.


------

I should submit some MOTU 24 I/O files; I have a feeling it's not as good as the 2408 mk3 (which should test the same as the 828 mk2).

Just listening to the test recordings of my Jupiter-8 that I made on the Hilo, 2408 mk3 and 24 I/O, the 2408 mk3 has more in common with the Hilo, especially in the bottom end. The 24 I/O sounds a touch crisper/brighter, probably because it captures less bottom end. It's just become more noticeable since I got my Focal SM9.

24 inputs and outputs means corners have to be cut somewhere.
So I'm thinking of selling my 24 I/O and grabbing another 2408 mk3 to go along with the two I already have; the PCIe card can support up to four, after all.
Old 7th October 2012
  #161
Gear Nut
 

Comparing the DiffMaker results with loopback frequency response (RMAA), there is a very strong correlation between the two.
Old 10th October 2012
  #162
Gear Nut
Can somebody do a Digidesign converter loop test please!
Old 10th October 2012
  #163
nms
Lives for gear
 
nms's Avatar
Hey didier.. just a heads up on a couple of things. Chances are I won't be around here much from here on, so here are a few things I wanted to mention.

The first and biggest source of error with null tests is the alignment. Converters each have a different amount of latency, and when that latency falls roughly halfway between samples, I think that could be one detractor; units that delay the audio by a whole number of samples would fare better. DiffMaker's time alignment has some room for error, but it does allow for these subsample alignments, which is why it's so important to enable it for the units that need it. It seems to do a nearly flawless job at the gain matching, though. I always turn the "initial gain step" resolution down to 0.01, which has improved results at times.
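The subsample alignment being described can be approximated outside DiffMaker with a cross-correlation peak refined by parabolic interpolation. A hedged numpy sketch (this is a standard estimator, not DiffMaker's internal method):

```python
import numpy as np

def subsample_lag(ref, test):
    """Estimate the delay of `test` relative to `ref` with subsample
    resolution: find the integer cross-correlation peak, then refine it
    by fitting a parabola through the three points around the peak."""
    corr = np.correlate(test, ref, mode="full")
    k = int(np.argmax(corr))
    lag = float(k - (len(ref) - 1))  # integer-sample delay
    if 0 < k < len(corr) - 1:
        y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
        denom = y0 - 2.0 * y1 + y2
        if denom != 0.0:
            lag += 0.5 * (y0 - y2) / denom  # parabolic peak offset
    return lag
```

The fractional part of the returned lag is the "halfway between samples" latency discussed above; a resampling filter can then shift the file by that amount before nulling.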

In the end, I felt the best approach was to accurately trim the sample start times so the files can be nulled manually; that way you can use DiffMaker with time alignment enabled or disabled. It seems best to run each piece both with and without it turned on and go with whichever method produces the best result on each file: in some cases the auto-align will improve things, in others it will degrade them. You can also listen to the null files for the one that sounds the most tightly aligned. When using the auto-alignment, the "limit evals" checkbox is also worth testing both enabled and disabled, as that can often give a better or worse result.

I definitely agree that the manually measured RMS figure of the null file is the better indicator to go by; correlation depth seems odd at times. Whenever you measure the RMS figure, though, make sure to trim the start and end points if they're loud (this is the problem with not having pre/post-roll on the test file) so you don't inadvertently include them in the measurement. I trim the first 10,000 samples and the last 12,000, which always seemed to work and keeps the approach uniform.
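The trim-then-measure routine is easy to script. A numpy sketch using the 10,000/12,000-sample trims from the post (the full-scale-1.0 dBFS reference and the silence clamp are my assumptions):

```python
import numpy as np

def null_rms_dbfs(diff, head=10_000, tail=12_000):
    """RMS level of a null (difference) file in dBFS, skipping the first
    `head` and last `tail` samples so edge clicks from imperfect
    trimming don't inflate the figure."""
    body = diff[head:len(diff) - tail].astype(np.float64)
    rms = np.sqrt(np.mean(body ** 2))
    return 20.0 * np.log10(max(rms, 1e-12))
```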

I never had a chance to look into group delay, but that too could contribute to poorer null test results. I think testing units for group delay could be interesting. The only software I'm familiar with that measures it is Room EQ Wizard. I've only used it for treating my room, but it seems like a promising candidate for measuring new angles.

I feel there are pros and cons to testing with single passes versus 10 passes. With 10 passes you've added 10 layers of conversion, so this "should" outweigh the margin for error in the alignment process. On the other hand, I could totally see DiffMaker's time alignment working more accurately on single-pass files, where there's less to confuse it. It could be worthwhile to compare units at several different pass counts just to explore whether a particular count favors any one unit more.

For all tests I think it's essential to use a file that has a pre-roll with a sharp marker transient in it. I've attached what I use: there are 25,000 samples before the waveform and 10 peaks in the marker. The uniform shape and peak count work as a good beacon of where the alignment is at. I also include a 2-second 993 Hz sine wave at -1 dBFS after the marker, which gives you a reference for matching the levels of a file. If the recorded tone peaks at -1.023 dBFS on the left channel, I'd adjust the entire left channel by +0.02 dB, then repeat the measurement and adjustment for the right. You always want to measure with a bit of space from the ends so you're measuring the steadiest part of the tone. This gave me great level matching. It may be unnecessary given how well DiffMaker does there, but it's fairly quick to do and eliminates one more possibility of error.
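The tone-based level matching above can be sketched as follows. The -1 dBFS reference comes from the post; measuring only the middle half of the tone is my stand-in for "a bit of space from the ends":

```python
import numpy as np

def gain_trim_db(recorded_tone, reference_peak_dbfs=-1.0):
    """How many dB to add to a channel so its recorded reference tone
    peaks at the intended level (the post uses a -1 dBFS, 993 Hz sine).
    Only the middle half of the tone is measured, away from the edges."""
    n = len(recorded_tone)
    steady = recorded_tone[n // 4 : 3 * n // 4]
    peak_dbfs = 20.0 * np.log10(np.max(np.abs(steady)))
    return reference_peak_dbfs - peak_dbfs
```

For a tone that comes back peaking at -1.023 dBFS, this returns roughly +0.023 dB, matching the hand correction described above.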

Now, all of that said: DiffMaker and our process still put the most transparent converter at the top of the list (the Lynx Hilo IS the most transparent of them, and ears and reports everywhere have generally agreed), and that says something. The high-end stuff all did well. The Orpheus trailed a little, but it's known for having a sound and still placed very well.

The units with the worst transient control (as visibly evident in the onset of the 60 Hz sine wave test I showed snapshots of) were also the units that did worst in the null tests. I really can't see this being coincidental, and I do believe it supports what we were doing here. Sloppy transient control obviously throws off the ability to null well. That kind of slingshot effect on the low end is something that could cause clipping in places, especially these days when music (specifically the low end) is maximized with such a fine line separating good clipping from ****ty-sounding clipping. Any converter which suffers from ringing would also be hurt in the null tests, and in my test file with the separate tones you can view the ends of the waveforms at different frequencies and see the tails.

Anyhow, all I can think of at the moment.

Marker attached with 25,000 sample pre-roll.
Attached Files

marker.wav (153.0 KB, 870 views)

Old 10th October 2012
  #164
Lives for gear
 

Quote:
Originally Posted by nms View Post
just a heads up on a couple things. Chances are I won't be around here much from here forwards so just a couple things I wanted to mention.
Hope you can set up a dedicated page of your own somewhere with all that data you compiled, nms. Valuable stuff!

Cheers.
Old 14th October 2012
  #165
Lives for gear
 

Quote:
Originally Posted by nms View Post
Hey didier.. just a heads up on a couple things. Chances are I won't be around here much from here forwards so just a couple things I wanted to mention. [...]
Thanks for all your hard work. Cheers
Old 14th October 2012
  #166
Lives for gear
 
andremattos's Avatar
 

OMG ..
E-MU 1616m is one step up from my symphony i/o
LoL
Old 15th October 2012
  #167
nms
Lives for gear
 
nms's Avatar
Quote:
Originally Posted by andremattos View Post
OMG ..
E-MU 1616m is one step up from my symphony i/o
LoL
Something's wrong there; I'm not sure if it was on the null alignment end or the recording end. When Ajantis sent me results for that Symphony in the other test thread, he sent one set that was bad and one that was good, and I'm not sure what got messed up. You have to be careful how you set your unit up for testing: make absolutely sure that nothing else is accessing the channels you're using to send and receive, and turn off any monitoring that might be feeding back. And in the case of the Apogees, you have to disable the soft clip function.
Old 16th October 2012
  #168
Gear interested
 

Would be great if someone provided a test with the AVID OMNI interface
Old 18th October 2012
  #169
Gear Maniac
 

Could someone explain the data to me?

Correlation depth, for example - what does this mean?

What is compared here?

Are the cards in the summary ordered on purpose (increasing/decreasing accuracy)?
Old 18th October 2012
  #170
Lives for gear
 
didier.brest's Avatar
 

Thread Starter
Don't worry about the correlation depth. It is a by-product of the cross-correlation that Audio DiffMaker performs between the two files to derive the time and gain correction. The larger this figure, the closer the loopback is to the original, but it is not a full audio-frequency-range measurement. The difference level between the original and the time- and level-aligned loopback is a more comprehensive figure. The top-down ranking is from best to worst. You should listen to both the best, the Hilo, and the worst, the Babyface, to decide whether these figures relate to what you hear. Here they are:
Babyface
Hilo
I must say that I don't hear much of a difference ...
Old 18th October 2012
  #171
Lives for gear
 
CoolColJ's Avatar
 

It depends on the audio being recorded - there's more difference in individual sounds than in full mixes. And the clocking makes a big difference.

My 2408 mk3 only sounds good when clocked from the Hilo.
Old 18th October 2012
  #172
nms
Lives for gear
 
nms's Avatar
Quote:
Originally Posted by CoolColJ View Post
My 2408 mk3 only sounds good when clocked from the Hilo
I think it's in your head man. I have 2 identical takes of a music clip; one recorded with the 828mk2 on its own clock and one with it clocked off the Hilo. You should be able to tell them apart no problem if I link them then?

It's just a test of the ADC. The source is Hilo streaming from a second DAW.

I'm not saying there's no improvement, but it's certainly not night and day and people I've sent it to said they can't hear a difference.
Old 18th October 2012
  #173
Lives for gear
 
CoolColJ's Avatar
 

Quote:
Originally Posted by nms View Post
I think it's in your head man. I have 2 identical takes of a music clip; one recorded with the 828mk2 on its own clock and one with it clocked off the Hilo. You should be able to tell them apart no problem if I link them then?

It's just a test of the ADC. The source is Hilo streaming from a second DAW.

I'm not saying there's no improvement, but it's certainly not night and day and people I've sent it to said they can't hear a difference.
Did you listen to my JP8 clips in the thread you deleted?
Where I posted comparisons between Hilo, 2408 mk3, and 2408 mk3 clocked from Hilo?
Pretty big difference to me - you will notice it in stereo sounds.

I've attached them here again
Attached Files
Old 19th October 2012
  #174
Lives for gear
 
Quint's Avatar
Quote:
Originally Posted by nms View Post
I think it's in your head man. I have 2 identical takes of a music clip; one recorded with the 828mk2 on its own clock and one with it clocked off the Hilo. You should be able to tell them apart no problem if I link them then?

It's just a test of the ADC. The source is Hilo streaming from a second DAW.

I'm not saying there's no improvement, but it's certainly not night and day and people I've sent it to said they can't hear a difference.
Have you considered that the reason you may not hear much of a difference could be due not to the Motu clock being high quality and comparable to the Hilo clock (as you seem to suggest), but instead to the Motu clock being poor enough that its phase-locked loop can't properly lock onto the incoming signal from the Hilo and take full advantage of a superior clock? That scenario could definitely result in no perceivable improvement from the Motu being clocked by the Hilo. The architecture of the slave clock matters in a situation like this.

I'm not necessarily saying that this is or isn't the case with the Motu but it is something to consider as external clocks can improve on the internal clock in some units but not necessarily in others.

That or you simply are unable to hear the difference, if the difference does actually exist.
Old 19th October 2012
  #175
Lives for gear
 
Quint's Avatar
Quote:
Originally Posted by CoolColJ View Post
Did you listen to my JP8 clips in the thread you deleted?
Where I posted comparisons between Hilo, 2408 mk3, and 2408 mk3 clocked from Hilo?
Pretty big difference to me - you will notice it in stereo sounds.

I've attached them here again
I'm not listening on my monitors, so I wouldn't even begin to try to make a sound comparison here. That being said, are you aware that the clip you used for the Hilo file is definitely a different one than what you used for the other two? Maybe you posted the wrong file by accident? You can tell they're different performances because of the difference in the down-stroke notes.
Old 19th October 2012
  #176
nms
Lives for gear
 
nms's Avatar
The Motus have absolutely no problem syncing to external clock sources. What else would you expect from the company that made the first FireWire audio interface? If you attempt a null test with the files, there's a significant difference between them due to slight clock drift. It goes without question that the clocking is improved. The question is: can YOU hear it and make out the night-and-day difference you insist should be present? I sent these files to a mastering engineer I know who owns around $20k in converters and monitored off his new Prism Dream DAC (a $9k unit). I also sent him a Hilo loop and asked him to guess which one was the Hilo loop; he picked the externally clocked 828mk2.


So.. does one of these sound night-and-day better than the other? Is one of them bad, and a clear example of how misleading our converter null test threads are, as evidenced by the "bad" Motu clock?

828mk2-Internal-Vs-External-Clock
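For anyone who wants to check the clock-drift claim numerically rather than by ear, a crude sketch: measure the cross-correlation lag between the two files in a window near the start and another near the end; a changing lag means the clocks are drifting apart. The window size and integer-only lags are simplifications; real material would want larger windows and subsample lag estimates.

```python
import numpy as np

def drift_ppm(ref, rec, win=4096):
    """Crude clock-drift estimate between an original and a recording of
    it: the change in cross-correlation lag between a start window and
    an end window, divided by the elapsed samples, in parts per million."""
    def int_lag(a, b):
        c = np.correlate(a, b, mode="full")
        return int(np.argmax(c)) - (len(b) - 1)
    l0 = int_lag(rec[:win], ref[:win])
    l1 = int_lag(rec[-win:], ref[-win:])
    span = len(ref) - win  # samples between the two window starts
    return (l1 - l0) / span * 1e6
```

Two files recorded on independent crystals will typically show a few ppm of drift; zero drift over a long file suggests the devices shared a clock.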
Old 19th October 2012
  #177
Lives for gear
 
Quint's Avatar
Quote:
Originally Posted by nms View Post
The Motus have absolutely no problem syncing to external clock sources. [...]
I never insisted anything about the quality of the clock in the Motu vs the Hilo. I simply presented a question to you based on well documented evidence that some internal clocks respond positively to external clocks and others do not. I also most definitely did not bring up null tests or loopback threads.

The only reason I even responded to your post was to point out that one isolated example, whether valid or not, is not necessarily grounds for a blanket statement about all external and internal clock combinations. You base your argument that Motus can reliably sync to external clocks on the clock-drift differences between the two scenarios, treating that as evidence that the Motu successfully synced to the Hilo. What about jitter introduced when clocking externally? Evidence of clock drift doesn't negate any detrimental effects of jitter that the Motu, or any other device, may experience when clocked externally. It also doesn't necessarily dictate how well a device can overcome the effects of being connected to an inferior clock, or any external clock for that matter. The way the two devices work together is critical to the overall performance. You can't make the assumptions you're making with any sort of consistency.

Additionally, the subjective assumption that "the company that made the first FireWire audio interface" must therefore also produce interfaces with top-notch internal clocks is completely indefensible.
Old 19th October 2012
  #178
Lives for gear
 
CoolColJ's Avatar
 

Quote:
Originally Posted by Quint View Post
I'm not listening on my monitors so I wouldn't even begin to try to make a sound comparison here. That being said, are you aware that the clip you used for the Hilo file is definitely a different one than what you used for the other two? Maybe you posted the wrong file on accident? You can tell they're different performances because of the difference in the down stroke notes.
It was recorded in real time from an analog synth which was playing a latched arpeggiator line. Playing a sound on either side of the stereo image.
I let the line record for enough bars so any pitch drift is evened out.
Since the synth is monophonic when playing like this, performance differences are not part of the equation.

Even then you can hear distinct tonality for each file over the whole duration of the playback. I mostly hear it as a tighter image, and more defined upper mids when clocked from the Hilo.
When internally clocked the stereo image sounds "confused" to me

Which is all clocking does: keep the playback sample rate stable.
So any drift with stereo files tends to smear the image and affect things like stereo decays, reverbs, etc.
Old 19th October 2012
  #179
Lives for gear
 
Quint's Avatar
Quote:
Originally Posted by CoolColJ View Post
It was recorded in real time from an analog synth which was playing a latched arpeggiator line. [...]
I understand what you're saying about the perceived effects of clocking, but regardless of whether you played it or the synth did its own thing, the result is pieces of music with much larger differences between them, musically speaking, than any differences one would hope to hear between two identical tracks that differ only in how they were clocked. It's near impossible to make any sort of distinction in a situation like this.
Old 19th October 2012
  #180
nms
Lives for gear
 
nms's Avatar
Quint, you do a lot of talking and speculating on theory, but the rubber never seems to meet the road. What happened to using your ears and just listening to the two files I've put in front of you? There's certainly no problem there with the takes not being identical.

I've seen plenty of first-hand evidence of converters resolving to external clocks without degradation: null tests, RMAA tests, files looping back sample-accurate. I've also put an 828mk2 clock output under an oscilloscope. So yeah, you could say I do more than speculate or talk theory.

Come on, man. Wasn't your whole premise in the loopback thread that it's all meaningless and misleading without the all-important clock testing? I think you're well overdue to show you can hear a significant difference between a loopback where the Motu is synced to the Hilo DAC and one where the Motu runs off its own clock independently.