Someone PMed me to post this test, and since I was curious myself, here it is!
This is a carefully put-together scientific aural A/B comparison test with the goal of removing as much bias as possible in a forum such as this. Some of you know I'm working on a proprietary comparison "application," which is much more scientifically relevant. But in the meantime, I'll post tests like this every so often if people continue to have interest. This is a "preference" test, so please state your preference per group.
There are reasons behind the structure of this test and its files, but every attempt has been made to allow the subject to "PASS," not "fail." If you do "fail" the test, it is very likely that you do not have a preference in this instance, even though you may think you do! There's more to this test than meets the eye (and ears). Unfortunately, I don't have much time to defend the science behind tests such as this against the ensuing arguments.
In order to provide any meaningful results, either on an individual or group level, you must compare all groups that are posted in the test. In this case there are ten [10] groups of short samples, twenty [20] samples in all.
In a nutshell, in order for you to actually have a preference with any real-world relevance, you must consistently choose the same device at least 9 out of 10 times. In other words, if you choose the same device 8 out of 10 times or fewer, that is below the acceptable confidence level of 95%. To put it another way, if you don't choose the same device at least 9 out of 10 times, it is likely that you are just guessing. However, nothing is absolutely certain. As I said before in another thread, this test does not prove anything.
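For anyone curious where the 9-out-of-10 threshold comes from, here's a quick sketch (my own illustration, not part of the test) of the binomial math under the assumption that a listener with no real preference is effectively flipping a fair coin on each group:

```python
from math import comb

def p_at_least(k, n):
    """P(X >= k) for X ~ Binomial(n, 0.5): the chance of picking the
    same device at least k times out of n purely by guessing."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

# 9+ of 10 by chance: ~0.0107 (below 0.05, so consistent with a real preference)
print(f"P(>=9 of 10) = {p_at_least(9, 10):.4f}")
# 8+ of 10 by chance: ~0.0547 (above 0.05, so still plausibly guessing)
print(f"P(>=8 of 10) = {p_at_least(8, 10):.4f}")
```

So 9 consistent picks is the smallest count where the guessing explanation drops below the conventional 5% cutoff, which is exactly why 8 of 10 doesn't clear the bar.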
Finally, if you truly are not hearing differences (even though you may think you are), it is likely that you will "flip flop" between the devices about 50% of the time.
You may record what you believe you are hearing between the samples. For example, "Group 1, Sample A sounds thicker than Sample B, so I think it's the hardware." Keep in mind that you are only comparing samples per group. In other words, you're not comparing one sample in Group 1 to another sample in Group 2. You must compare only two samples at a time per group.
Here's an example:
G1_Sample_A: Nebula, because it sounds strident.
G1_Sample_B: This is definitely the hardware because it's easy on the ears.
or...
G2_Sample_A: Nebula EQ
G2_Sample_B: Hardware EQ
Not valid (groups aren't the same):
G3_Sample_A: Hardware EQ
G4_Sample_B: Nebula EQ
I've found the best way to compare these samples and to improve accuracy is to line the samples up one group at a time in the timeline of your DAW. In other words, don't stack the next group of samples on tracks below the first group... place each new group shortly AFTER the previous one in the timeline, and so on.
For each group, continually loop the samples so that they play with no gaps, with ONE sample muted. Then, SOLO the muted sample in order to perform a seamless switch between the two. Keep going back and forth until you make a decision for that group. Record your findings.
This is a stacked test of multiple tracks, where each track has been treated with quite a bit of EQ, relatively speaking. Each version was mixed down, so that one sample in a group is a mix containing only the hardware EQ, and the other sample in the same group is a mix containing only the Nebula EQ.
I will post an encrypted key before I reveal the results. Have fun!
AlexB_1_of_3.zip
AlexB_2_of_3.zip
AlexB_3_of_3.zip
EDIT ------------------ New 96K Files Added ----------------------
This is the same type of test with ten groups, but they are randomized again. Please extract all files to one directory and see above for instructions.
AlexB_1_of_5_96K.zip
AlexB_2_of_5_96K.zip
AlexB_3_of_5_96K.zip
AlexB_4_of_5_96K.zip
AlexB_5_of_5_96K.zip