Good dither practices, what are yours?
Old 25th January 2017
  #181
It's part of a series which started here: Archimago's Musings: INTERNET TEST: 24-bit vs. 16-bit Audio - Can you hear the difference?

...as explained in the first sentence of the original link

Edit: Fair enough, they do not test the audibility of truncation itself. This is a pure 16bit vs 24bit test. But a very well made one. I think we'll have to roll out our own truncation audibility test in order to get sensible results about the audibility of dithered vs non dithered truncation to 16bit.
Old 25th January 2017
  #182
Airwindows
 
chrisj's Avatar
You know, when you can measure something and document what happens with it, it doesn't really matter SUPER much whether end users (or even we) can hear it on Apple earbuds (or played off YouTube).

Audibility is a separate issue. If I did a hardware output to a 1176 with all buttons in and turned it up as high as it would go, I daresay I'd find at least the 16 bit noise floor extremely audible, maybe even a 24 bit noise floor. Then I'd hear the truncation or dither. If not, add another 1176, or listen on headphones to isolate things a bit more.

Upthread is interesting. I have no idea how I manage to get PARTLY uncorrelated random noise in channels. They're quite independent: the audio unit versions are N to N and will do 5.1 or anything else you like.

Probably associated with the fractional-bit DC offset which is part of how mine sounds better as a noisefloor (which has been commented on, though it seems weird too). You can't have better or worse without 'different'.

I encourage Kazrog to treat the mono-dither issue as a bug, it's a dealbreaker for professional mastering engineers and he deserves better than to be written off for such an oversight. I know it's more CPU to generate independent dither for each channel, but don't let that stop you, Kazrog! Put out an update and we'll put this behind us!
Old 25th January 2017
  #183
Gear Maniac
 
Yuri Korzunov's Avatar
 

Quote:
Originally Posted by FabienTDR View Post
I finally found a proper ABX test. There aren't many around!

Archimago's Musings: 24-Bit vs. 16-Bit Audio Test - Part II: RESULTS & CONCLUSIONS

In short: We get a solid 50/50, not really a surprise!
It is technically impossible to compare the 24-bit and 16-bit formats in themselves.

Every playback chain works in different modes.

We can only compare software/hardware implementations.

Also, a proper double-blind test is not a home experiment.
Old 25th January 2017
  #184
Motown legend
 
Bob Olhsson's Avatar
 

My experience has been that if the audio has ever been truncated to 24 bits, the difference from further reduction becomes very subtle. Hopefully this wasn't the incompetently developed version of Audition that truncated everything to 23 bits! We ran into this when we were trying to do a demo for an AES meeting.

Just one of my experiences that left me skeptical of developers.
Old 25th January 2017
  #185
Airwindows
 
chrisj's Avatar
Quote:
Originally Posted by Bob Olhsson View Post
My experience has been that if the audio has ever been truncated to 24 bits, the difference from further reduction becomes very subtle.
This can also be expressed as, "If you have training and a monitoring system capable of dealing with really high resolution source material (for instance, open reel tape or maybe DSD) and then you try using properly dithered 24 bit PCM audio, that gets you surprisingly far in the direction of the crazy boutique stuff".



So, dithered 16 versus truncated 24 is NOT nearly so big a difference as dithered 16 versus dithered 24. It becomes all about the quietness and tone color of the noise floor.

Throwing any nondithered truncations in there (wordlength reductions) immediately throws a monkey wrench in the works, and it's a change in type of sound, not just changes in noise amplitude.

This does also affect 32 bit floating point audio (I need to make that video I was being asked for, about explaining these subjects). 32 bit float contains 24 bit truncation within it, just at less than half the amplitude of the artifacts: but they're still there. If you develop audio you should be maintaining internal busses at 64 bit floating point or better (however, going higher than that costs a lot of CPU: 64 bit is more convenient for processors to handle).
Old 25th January 2017
  #186
Motown legend
 
Bob Olhsson's Avatar
 

I'm not sure it requires that much training. Truncation removes low level information. Once it's gone, it's gone! This is about imaging, depth, etc.
Old 25th January 2017
  #187
Airwindows
 
chrisj's Avatar
Quote:
Originally Posted by Bob Olhsson View Post
I'm not sure it requires that much training. Truncation removes low level information. Once it's gone, it's gone! This is about imaging, depth, etc.
I said training (and monitoring) because you're speaking about truncation to 24 bit. I'm convinced, but we do have an industry full of people insisting that it's all indistinguishable from sufficiently high bitrate Ogg Vorbis (admittedly higher performing than mp3), which would make even truncation to 16 bit still more fugitive. Hence, I invoke training and monitoring.

I agree and I've got a lot of theories on what is lost and why it matters. I'm a huge fan of what Neil Young tried to do with Pono (I own one, and man is that a good sounding little cheese-wedge ). The trouble is making the case, 'this will matter to your overall listening experience' and establishing that's different from '9 out of 10 people failed the ABX test, therefore we banish your dither foolishness to the island of misfit audiophiles!'

Good dither practices, as specified in the OP, are 'handle the wordlength reductions with technical correctness' or possibly 'boutique dithers' in the right circumstances. I have one coming out in February. But the real underlying truth is 'something technically equivalent to TPDF' and decorrelating the quantization noise on both levels (first the raw quantization, then the variations in amplitude). That is measurably, technically, reproducibly correct, on a level that has nothing to do with listener error. If you asked a computer what was its good dither practices, it'd tell you to use TPDF dither whenever you reduced word length.
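To make the "technically equivalent to TPDF" baseline concrete, here's a minimal sketch in C of a TPDF-dithered reduction to 16 bit: two independent uniform random values, each spanning half an LSB, summed and added before rounding. The rand()-based noise source and the function name are my own simplifications, not code from Airwindows or any shipping plugin.

Code:
#include <stdint.h>
#include <stdlib.h>
#include <math.h>

/* Reduce a sample in the -1.0 .. 1.0 range to 16-bit, with TPDF dither.
   TPDF = sum of two independent uniform values of +/-0.5 LSB each,
   giving a triangular distribution with +/-1 LSB peaks. */
static int16_t to_16bit_tpdf(double sample)
{
    double lsb = 1.0 / 32768.0;                     /* one 16-bit LSB   */
    double r1  = (double)rand() / RAND_MAX - 0.5;   /* uniform, +/-0.5  */
    double r2  = (double)rand() / RAND_MAX - 0.5;   /* independent draw */
    long   q   = lround((sample + (r1 + r2) * lsb) * 32768.0);
    if (q >  32767) q =  32767;                     /* clamp to 16-bit  */
    if (q < -32768) q = -32768;
    return (int16_t)q;
}

For stereo or multichannel work you would draw fresh random values for every channel, which is exactly the mono-dither problem raised upthread.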
Old 25th January 2017
  #188
Quote:
Originally Posted by chrisj View Post
You know, when you can measure something and document what happens with it, it doesn't really matter SUPER much whether end users (or even we) can hear it on Apple earbuds (or played off YouTube).
Analyzing a digitally generated sine wave moving through the system is one thing, but can you really demonstrate much with regard to music signals? i.e. full mixes? From my experience, all of them end up containing a relatively huge amount of random noise (relative to the 16 bit LSB) long before they hit any truncation. All ADs also dither on the way in, for obvious reasons, so a lack of noise is more of an exceptional case.

Dithering is fine. But it's not that bad either if you forget it here and there, or do it once too much. The effect of "improper truncation" is very real, but quickly becomes irrelevant in a naturally noisy environment and with increasing word length. Going from 32-bit floating point to 24-bit fixed, or even 16-bit, it's minuscule. In the case of music, you most probably wouldn't even be able to measure any drawback either (again, because the music likely already contained noise higher than the target LSB, so zero nonlinearities have been provoked!).

Dithering is important when the signal doesn't contain natural noise above the target LSB. In case of 16bit and audio signals, it's quite rare (ITB minimal techno under lab conditions maybe? But certainly not in a recording situation).

I'm not particularly biased toward dithering or not. I just see no reason to make it more than what it is: A small gear in the clockwork, each time you truncate to fixed. Not super relevant. The right soda for the vocalist has far more effect on the end result.

Last edited by FabienTDR; 25th January 2017 at 11:09 PM..
Old 25th January 2017
  #189
Airwindows
 
chrisj's Avatar
Quote:
Originally Posted by FabienTDR View Post
Dithering is important when the signal doesn't contain natural noise above the target LSB.
On the contrary: back in the day, I thought I would do an ABX test and demonstrate how lame low-bit audio is, using pure noise. I compared 24-bit noise with something like 4-bit: I forget.

I do remember that I effortlessly ABXed the heck out of it and got 10 out of 10, anytime I wanted, and noticed an unpleasant tonal coloration and edginess that was my 'tell'… on raw noise, just noise.

Then someone suggested I dither the noise. So I duly TPDF dithered the 24 bit noise, and tried again.

So much for my ability to distinguish between the two. Noise ain't just noise.

To correctly reproduce your analog noise floor AND whatever information is buried in it, you have to dither ('add noise to the noise'), correctly. Then it works.
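For anyone who wants to repeat that experiment, here is a rough sketch (the file names, the 4-bit word length and the rand()-based noise are my own assumptions, not the original test material): it writes ten seconds of white noise reduced to 4 bits, once plainly truncated and once TPDF dithered, as raw 16-bit PCM files you can import into any editor and compare blind.

Code:
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <math.h>

/* Triangular (TPDF) noise with +/-1 LSB peaks. */
static double tpdf(void)
{
    return ((double)rand() / RAND_MAX - 0.5)
         + ((double)rand() / RAND_MAX - 0.5);
}

int main(void)
{
    FILE *plain = fopen("noise_4bit_truncated.raw", "wb");
    FILE *dith  = fopen("noise_4bit_dithered.raw",  "wb");
    if (!plain || !dith) return 1;

    for (int n = 0; n < 44100 * 10; n++) {           /* 10 s at 44.1 kHz */
        double x = 2.0 * rand() / RAND_MAX - 1.0;    /* white noise      */
        double steps = 8.0;                          /* 4-bit grid       */
        double t = floor(x * steps) / steps;                 /* truncated */
        double d = floor(x * steps + tpdf() + 0.5) / steps;  /* dithered  */
        if (d >  1.0) d =  1.0;                      /* keep in range    */
        if (d < -1.0) d = -1.0;
        int16_t ts = (int16_t)lround(t * 32767.0);
        int16_t ds = (int16_t)lround(d * 32767.0);
        fwrite(&ts, sizeof ts, 1, plain);
        fwrite(&ds, sizeof ds, 1, dith);
    }
    fclose(plain);
    fclose(dith);
    return 0;
}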
Old 26th January 2017
  #190
Motown legend
 
Bob Olhsson's Avatar
 

Dither sounds like noise but noise doesn't function as dither. We can hear into the noise floor when the audio is dithered.
Old 26th January 2017
  #191
Lives for gear
 
Arksun's Avatar
Quote:
Originally Posted by chrisj View Post
So, dithered 16 versus truncated 24 is NOT nearly so big a difference as dithered 16 versus dithered 24. It becomes all about the quietness and tone color of the noise floor.
Are you seriously suggesting that 24-bit dithered sounds that much better than 24-bit truncated, a 'big' difference? Perhaps if we're talking about an audio recording that peaks at -90 dBFS at the most and has to be boosted afterwards, but for anything else recorded to a reasonable level you're just not going to hear the difference at all, because all the truncation distortion is below the noise floor of the DA converter, and human hearing for that matter.

Perhaps if you're someone listening to the world's most dynamic piece of classical music ever written at 24 bit and like the loud parts of the track playing at 140 dB out of the speakers, then you might hear it in the quietest fadeout parts, although you'd be so deafened by the loud parts of the track that your ears' ringing would mask any distortion in the quietest bits.

An iterative process is different of course, but that's why plugins tend to work at 64-bit float internally to stop any buildup over multiple processes.
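Just to illustrate the buildup an iterative process can cause (a throwaway sketch with arbitrary constants, not anyone's plugin code): the same hundred-stage gain chain, run once with the word length knocked back to 24 bits after every stage and once kept at double precision until the end.

Code:
#include <stdio.h>
#include <math.h>

/* Snap a value onto the 24-bit fixed-point grid, as a word-length
   reduction between processing stages would. */
static double to24(double x)
{
    return floor(x * 8388608.0 + 0.5) / 8388608.0;
}

int main(void)
{
    double ideal = 0.3, staged = 0.3;
    for (int i = 0; i < 100; i++) {
        ideal  *= 1.01;                 /* stays double precision         */
        staged  = to24(staged * 1.01);  /* reduced to 24 bits every stage */
    }
    printf("error from per-stage reduction: %g\n", fabs(staged - to24(ideal)));
    printf("one 24-bit step:                %g\n", 1.0 / 8388608.0);
    return 0;
}

The per-stage version will typically end up a few 24-bit steps away from the full-precision result, which is the buildup a 64-bit internal path avoids.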
Old 26th January 2017
  #192
Gear Addict
 

When bouncing out a final master, it's been argued that dither isn't necessary because truncating to 24 bit produces distortion below the noise floor that any D/A converter is capable of reproducing, and that makes sense for sure.

But it'd be interesting to test purely the cumulative effect that's also been discussed here - dither accumulating at 3 dB, versus truncation noise accumulating at 6 dB, across multiple tracks in a mix.

Seems to me there might still be a case for dithering to 24 bit if the file in question is being used in a mix.
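For what it's worth, that cumulative effect is easy to probe in a throwaway program (a sketch under my own assumptions: sixteen identical low-level tracks, 16-bit reduction per track, rand()-based TPDF). Summing identical tracks is the worst case, because undithered quantization error repeats on every copy and adds up in amplitude, while independently dithered error only adds up in power.

Code:
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* Reduce a sample to the 16-bit grid, with optional TPDF dither. */
static double q16(double x, int dither)
{
    double d = dither ? ((double)rand() / RAND_MAX - 0.5)
                      + ((double)rand() / RAND_MAX - 0.5) : 0.0;
    return floor(x * 32768.0 + d + 0.5) / 32768.0;
}

int main(void)
{
    const double TWO_PI = 6.283185307179586;
    int N = 16, len = 100000;                /* 16 identical quiet tracks */
    double err_trunc = 0.0, err_dith = 0.0;
    for (int n = 0; n < len; n++) {
        double x = 0.001 * sin(TWO_PI * 1000.0 * n / 44100.0);
        double ideal = N * x, sum_t = 0.0, sum_d = 0.0;
        for (int k = 0; k < N; k++) {
            sum_t += q16(x, 0);  /* same error every copy: adds in amplitude  */
            sum_d += q16(x, 1);  /* independent error per copy: adds in power */
        }
        err_trunc += (sum_t - ideal) * (sum_t - ideal);
        err_dith  += (sum_d - ideal) * (sum_d - ideal);
    }
    printf("undithered residual: %.1f dBFS\n", 10.0 * log10(err_trunc / len));
    printf("dithered residual:   %.1f dBFS\n", 10.0 * log10(err_dith  / len));
    return 0;
}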
Old 26th January 2017
  #193
Motown legend
 
Bob Olhsson's Avatar
 

Why intentionally distort audio? Distortion accumulates and eventually turns the audio crunchy and "digital" sounding. Almost every music recording is a serious investment in time and money.

This is basic DSP math developed by Claude Shannon and Harry Nyquist at Bell Labs.
Old 26th January 2017
  #194
Motown legend
 
Bob Olhsson's Avatar
 

Quote:
Originally Posted by mustgroove View Post
When bouncing out a final master, it's been argued that dither isn't necessary because truncating to 24 bit produces distortion below the noise floor that any D/A converter is capable of reproducing, and that makes sense for sure...
It sounds plausible if you don't understand how noise and perceptual masking work.
Old 26th January 2017
  #195
Airwindows
 
chrisj's Avatar
Quote:
Originally Posted by Bob Olhsson View Post
It sounds plausible if you don't understand how noise and perceptual masking work.
Yeah. Our ears are GREAT at picking out the stealthy noise of a tiger sneaking closer, buried in the sound of the wind through the trees. Little artifacts, little correlated noises, jump out for us.

Consistently? Oh hell no. But we're wired to seize upon any little detail that's out of place. Truncation artifacts are just such details: quite unnatural, and they can really make a distinctly not-noise sound, right where our ears are most sensitive. If all truncation artifacts hit above 10 kHz or below 100 Hz, we'd never know or care about 'em. They're like splinters in your sock: you won't notice for a dozen steps and then poink!
Old 26th January 2017
  #196
Lives for gear
 
Arksun's Avatar
Quote:
Originally Posted by chrisj View Post
Yeah. Our ears are GREAT at picking out the stealthy noise of a tiger sneaking closer, buried in the sound of the wind through the trees. Little artifacts, little correlated noises, jump out for us.
Absolutely, a gentle breeze amongst the trees is very quiet; relatively speaking the two are pretty close. But 24-bit truncation distortion is like trying to pick out the sound of the tiger sneaking closer with a pair of speakers blasting out music at 120 dB in that jungle. Good luck with that
Old 26th January 2017
  #197
Lives for gear
 
JP__'s Avatar
 

While mastering I mostly receive mixes that were either dithered or truncated to 24 bit (or even 16 bit in rare cases). And while processing, the distortion on the truncated mixes can become more obvious (I have done a lot of tests where the distortion becomes clearly audible even through a quite high noise floor, after feeding the analog chain for example), while dithered files stay much more inconspicuous through the process. It's also simply music-dependent how obviously the distortion will come out.
This isn't really a topic where simplified theory works well; it's sometimes about complex interactions, as Bob has said many, many times. Just go out and do your own tests. Being theoretically sceptical is just too easy. You will be surprised, maybe. Like the theoretically well-educated digital audio developers to whom I have presented some of those tests...
Old 26th January 2017
  #198
Never bothered with dithering until I started reading this thread.

I have done some tests and boy, what a difference! By correctly implementing dithering I was able to take away some of the edginess from my mixes, making them sound closer to my ideal.

This is all very true and very tangible.


It's been mentioned that in some genres truncation is used in creative ways.
I'm fine with that.
If, say, the production I'm working on required some extra degree of edginess, I would consider not dithering.

After all, throughout the course of music history, we've witnessed many fine examples of creative decisions considered technically incorrect that moved our hearts and made us dance to the beat.

Cheers,
Andy
Old 26th January 2017
  #199
Airwindows
 
chrisj's Avatar
Quote:
Originally Posted by andyisdead View Post
It's been mentioned that in some genres truncation is used in creative ways.
I'm fine with that.
If, say, the production I'm working on required some extra degree of edginess, I would consider not dithering.
I've said things like that. Bear in mind that what I'm thinking of is old Akai samplers…

If you want elements (like rhythmic elements) truncated so that they can play louder without you getting a fix on them (allowing for say, a rap, to draw more attention) then 12 bit or so, even 8, might be where you'd truncate. Or 16 bit with some additional doctoring to the samples just to get the beat a little flatter.

I'm not sure there's any possible purpose to 24 bit truncation. Just dither that, same with most 16 bit (even in modern genres). Truncation as an effect ought to be 6-14 bits, possibly with some analog modeling to soften the texture you get.
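For the creative-effect case, the whole point is the opposite of good dither practice: you truncate on purpose and keep the correlated error. A minimal sketch (the parameter range follows the post above; the function name is mine):

Code:
#include <math.h>

/* Crush a -1..1 sample down to 'bits' of resolution, deliberately
   undithered: the correlated quantization error IS the effect.
   Musically useful roughly in the 6-14 bit range; at 24 bits there is
   nothing left of it to hear. */
static double bitcrush(double x, int bits)
{
    double steps = (double)(1 << (bits - 1));  /* e.g. 2048 steps at 12 bit */
    return floor(x * steps) / steps;           /* hard truncation           */
}

A softer, more "old sampler" flavour would wrap this in some gentle saturation or filtering, which is the analog-modelling doctoring mentioned above.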
Old 26th January 2017
  #200
Quote:
Originally Posted by chrisj View Post
I've said things like that. Bear in mind that what I'm thinking of is old Akai samplers…

If you want elements (like rhythmic elements) truncated so that they can play louder without you getting a fix on them (allowing for say, a rap, to draw more attention) then 12 bit or so, even 8, might be where you'd truncate. Or 16 bit with some additional doctoring to the samples just to get the beat a little flatter.

I'm not sure there's any possible purpose to 24 bit truncation. Just dither that, same with most 16 bit (even in modern genres). Truncation as an effect ought to be 6-14 bits, possibly with some analog modeling to soften the texture you get.
Gotcha!
Old 26th January 2017
  #201
Lives for gear
 
JulenJVM's Avatar
Quote:
Originally Posted by Bob Olhsson View Post
Dither sounds like noise but noise doesn't function as dither. We can hear into the noise floor when the audio is dithered.
And this is why what matters about truncation distortion is what it takes away from the mix in terms of small details, rather than whether it's audible or not. The distortion interacts with the noise floor and affects the mix; in my humble experience, everything sounds better in digital when dithered.

Also, we can't hear DC offset, but we take care to remove it because we know it's good to do so. Why not do the same with dither, and at least trust the theory behind it?
Old 26th January 2017
  #202
Lives for gear
 
JP__'s Avatar
 

Quote:
Originally Posted by JulenJVM View Post
Also, we can't hear DC offset, but we take care to remove it because we know it's good to do so. Why not do the same with dither, and at least trust the theory behind it?
I don't really share your thoughts here. There's no free lunch in audio, and there's no way to remove DC without contracting the audio. I would never do such a step without real necessity.

It's never a good idea to "trust" anyone or any theory when working with audio. So all scepticism regarding dithering is healthy in a way, but it should lead people to do their own experiments rather than believe in some simplified theory (hearing threshold, noise floor etc...).
Same goes for DC removal.
Old 26th January 2017
  #203
Lives for gear
 

Quote:
Originally Posted by mustgroove View Post
But it'd be interesting to test purely the cumulative effect that's also been discussed here - dither accumulating at 3 dB, versus truncation noise accumulating at 6 dB, across multiple tracks in a mix.

Only with the first two instances of added noise do you get +3 dB, as explained earlier. After that, the noise gain added by each further "pass" decreases significantly.

Also, I think it's a bit simplified (possibly erroneous) to say that truncation distortion increases by 6 dB for every instance of truncation.
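To put numbers on that: with equal, uncorrelated noise sources the powers add, so the total only grows as 10*log10(n) relative to a single source. A throwaway sketch (not anything posted in the thread):

Code:
#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Level of n equal, uncorrelated noise sources relative to one source. */
    for (int n = 1; n <= 8; n++)
        printf("%d sources: %+5.1f dB\n", n, 10.0 * log10((double)n));
    return 0;
}
/* prints: 1 +0.0, 2 +3.0, 3 +4.8, 4 +6.0, 5 +7.0, 6 +7.8, 7 +8.5, 8 +9.0 */

So the first doubling costs +3 dB, but each further pass adds less and less.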
Old 26th January 2017
  #204
Airwindows
 
chrisj's Avatar
Quote:
Originally Posted by JP__ View Post
I don't really share your thoughts here. There's no free lunch in audio, and there's no way to remove DC without contracting the audio. I would never do such a step without real necessity.
Contracting how? I bet you I can do it. The catch is, I would do it by simply adding a constant… so you might get a pop cutting the track in and out, it'd have to line up perfectly with the DC-offset audio.

However, it'd be a sonically perfect fix. It would just be impossible to automatically remove the DC offset. You'd have to know in advance what you're adjusting for.
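What that amounts to in code is almost nothing (a sketch on my own assumptions; the offset has to be known, or measured offline, exactly as described):

Code:
/* Remove a known, constant DC offset by subtracting it from every sample.
   Sonically transparent: no filtering, no phase shift. The catch is that
   nothing here figures out the offset for you. */
static void remove_dc_constant(float *buf, long n, float known_offset)
{
    for (long i = 0; i < n; i++)
        buf[i] -= known_offset;
}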
Old 26th January 2017
  #205
Gear Addict
 
Noise Commander's Avatar
 

I'm recording through an Aurora 16.
I overdub guitars, vocals, bass etc. and process EVERYTHING in Logic X.

I only dither when bouncing a 16 bit file.

Should I put a dither plugin on the last insert of the master bus?

Let's say I have an audio channel with an EQ in the first insert of the Logic channel and a compressor in the second insert.
Does Logic stay 32-bit float throughout the whole chain of plugins inserted on that channel?
Or does it truncate to 24 bit in between every plugin?

Do I have to put a dithering plugin in between plugin insert slots?

Do I have to put a dithering plugin in the last slot of the mastering bus while listening/mixing only? And when I'm bouncing to 24 bit for mastering?

Please help me in my specific case.
Is there any info on how logic does dithering internally?
Old 27th January 2017
  #206
Lives for gear
 
JP__'s Avatar
 

Quote:
Originally Posted by chrisj View Post
Contracting how? I bet you I can do it. The catch is, I would do it by simply adding a constant… so you might get a pop cutting the track in and out, it'd have to line up perfectly with the DC-offset audio.

However, it'd be a sonically perfect fix. It would just be impossible to automatically remove the DC offset. You'd have to know in advance what you're adjusting for.
A DC remover is a low-cut filter which just alters the phase. It can work or not; we should not decide its use by numbers but by ear.

Last edited by JP__; 27th January 2017 at 04:09 PM..
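For reference, the automatic DC remover being described is usually a first-order high-pass, something along these lines (a generic sketch, not any particular DAW's implementation; the coefficient 0.995 is an arbitrary example):

Code:
/* One-pole DC blocker: y[n] = x[n] - x[n-1] + R * y[n-1], R just below 1.
   It removes DC without knowing the offset, at the cost of a phase shift
   (and slight level change) at the very lowest frequencies. */
typedef struct { double x1, y1, R; } dc_blocker;

static double dc_block(dc_blocker *s, double x)
{
    double y = x - s->x1 + s->R * s->y1;
    s->x1 = x;
    s->y1 = y;
    return y;
}

/* usage: dc_blocker s = { 0.0, 0.0, 0.995 };  out = dc_block(&s, in); */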
Old 27th January 2017
  #207
Lives for gear
 
Arksun's Avatar
Quote:
Originally Posted by Noise Commander View Post
Let's say I have an audio channel with an EQ in the first insert of the Logic channel and a compressor in the second insert.
Does Logic stay 32-bit float throughout the whole chain of plugins inserted on that channel?
Or does it truncate to 24 bit in between every plugin?
It stays 32-bit float throughout the mix engine, or 64-bit float now actually, I think the latest update to Logic X bumped up the mixer engine to 64-bit.
Old 27th January 2017
  #208
Airwindows
 
chrisj's Avatar
Quote:
Originally Posted by Arksun View Post
It stays 32-bit float throughout the mix engine, or 64-bit float now actually, I think the latest update to Logic X bumped up the mixer engine to 64-bit.
Technically, to use CoreAudio (and plugins) it must revert to 32-bit float between each plugin.

Their use of 64-bit in mixing I heartily endorse, but it's no different than plugins doing their internal calculations on a double precision buss. We can pass around floats by way of routing audio, but Logic had no business doing all their mixing math just with floats: always should have been doubles, even when the audio streams are floats. I've been using doubles internal to plugins since 2007.

Heck, if there were good ways to dither to floats (32 bit), we'd never need anything more as floats are always less than half the quantization of 24 bit fixed point for any signal not clipping (and get better, as the volume drops: the quantization's directly tied to how loud the sample is). Since there isn't a good way to dither to floats, using 64 bit makes the quantization much smaller for any given level.

That's just arbitrary, by the way. You could have a 64 bit float with worse quantization than 16 bit fixed point if you set it up that way: you'd pretty much be using it to record numbers much closer to infinity or to zero, so the granularity in the range of audio samples could be worse. You could have a 32 bit float that was better optimized for audio by using a much smaller exponent, so you'd have say +40 dBFS headroom (rather than four million dB) and it'd give up on near-silent audio much quicker.

What we've got is what general purpose computing uses, so our 32 bit float is better than 24 bit in the audio range but with some limitations, and then it also offers headroom (and ability to do faint noises) way way beyond anything we practically need. 64 bit is the same general purpose thing, but it expands not only in 'ultimate limits' but also in terms of the granularity within the ranges we use.
Old 27th January 2017
  #209
Lives for gear
 
Arksun's Avatar
Quote:
Originally Posted by chrisj View Post
Technically, to use CoreAudio (and plugins) it must revert to 32-bit float between each plugin.
As an API surely this only affects how the main Logic software sends the data through the operating system to the audio interface? i.e. as long as it's the Logic software processing the bits internally, there's no reason why it has to revert to 32-bit between each plugin, unless the plugins themselves force it down to 32-bit as a matter of course.

Either way it doesn't matter. Going back to his question of dithering after every plugin: 32-bit float absolutely does not need dither; to suggest otherwise is pretty insane.

And I still maintain that for any decent-level signal, dithering to 24-bit is inaudible compared to just raw truncation; only if it's a super low level signal and you massively boost it afterwards can it become audible. I would so love to blind test someone who claims they can hear the difference between a final mixdown rendered to 24-bit WAV vs 24-bit + dither; I think it would be a real eye opener for them
Old 27th January 2017
  #210
Airwindows
 
chrisj's Avatar
Quote:
Originally Posted by Arksun View Post
As an API surely this only affects how the main Logic software sends the data through the operating system to the audio interface? i.e. as long as it's the Logic software processing the bits internally, there's no reason why it has to revert to 32-bit between each plugin, unless the plugins themselves force it down to 32-bit as a matter of course.

Either way it doesn't matter. Going back to his question of dithering after every plugin: 32-bit float absolutely does not need dither; to suggest otherwise is pretty insane.

And I still maintain that for any decent-level signal, dithering to 24-bit is inaudible compared to just raw truncation; only if it's a super low level signal and you massively boost it afterwards can it become audible. I would so love to blind test someone who claims they can hear the difference between a final mixdown rendered to 24-bit WAV vs 24-bit + dither; I think it would be a real eye opener for them
The plugins do just that. CoreAudio has you do this:
const Float32 *inSourceP, // The audio sample input buffer.
Float32 *inDestP, // The audio sample output buffer.
UInt32 inSamplesToProcess, // The number of samples in the input buffer.
UInt32 inNumChannels, // The number of input channels. This is always equal to 1
// because there is always one kernel object instantiated
// per channel of audio.

VST offers a 'doubleReplacing' mode where you can run a 64-bit buss. CoreAudio doesn't do that.

I'm going to respectfully contradict you completely regarding 32-bit float not needing dither, and I'm not suggesting, I'm stating it loud and clear. We're not talking about listening tests here, we're talking about technical correctness (and professional listeners with very pricey gear who have preferences for using dither at levels you consider unnecessary: Bob Olhsson is not the only one, and all these guys seem to have shockingly high-performance systems, as they are MEs).

32 bit float will generate truncation artifacts UP TO about half (a quarter?) the amplitude of 24 bit fixed point, at sample amplitudes using the highest loudnesses before clipping. As amplitudes decrease, the exponent kicks in, and the truncation artifacts get progressively quieter: they're always a factor of how loud that sample is, so they quickly drop down to insignificance, but it's surprisingly close to 24 bit fixed at the loudest samples.

Amusingly, if you count clipped samples, you can get truncation artifacts WAY louder than even 16 bit truncation. If you take a sample that's sufficiently above clipping, and don't scale it, the truncation could be louder than the entire audio range: equivalent to one bit audio. Of course, once you scale it all down again then your artifacts are scaled down too, and become quieter than 24 bit fixed again.

BUT that does mean that even if you exploit the range of floating point, handling wildly over-0dbFS samples without clipping, you'll continue to get the same degradation at every single math operation at 32 bit. It'll always be scaled to the amplitude of the sample, so quiet stuff brought up is just as degraded as loud stuff brought down. The higher amplitude samples get worn away first, and quiet stuff in the context of loud samples will always hang on to more quality in a relative sense.

This is because the mantissa (the part that's scaled up and down to fit) is 23 bits. It is literally a fixed point number that's always between 0.5 and 1, being scaled by the exponent. Since it's fixed point it generates truncation like any fixed point representation, and as such it needs dither like any fixed point representation.
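A quick way to see that for yourself (my own throwaway sketch, not anything from the thread): print the spacing between adjacent 32-bit float values at a few sample amplitudes and compare it with one 24-bit fixed-point step.

Code:
#include <stdio.h>
#include <math.h>

int main(void)
{
    /* The "grid" a 32-bit float truncates onto, at various amplitudes,
       versus one 24-bit fixed-point LSB over the -1..1 range. */
    printf("24-bit fixed LSB:      %g\n", 1.0 / 8388608.0);
    float levels[] = { 0.99f, 0.5f, 0.1f, 0.001f };
    for (int i = 0; i < 4; i++) {
        float x = levels[i];
        printf("float step near %g:  %g\n",
               (double)x, (double)(nextafterf(x, 2.0f) - x));
    }
    return 0;
}
/* Near full scale the float step is about 6e-8, half a 24-bit LSB
   (1.2e-7); at lower amplitudes it shrinks along with the exponent. */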

People just don't do it, because (a) it's a pain and (b) like you they're convinced that since it's floating point it's infinite resolution.

No no no, it is most certainly not. (64 bit float has 52 bits of mantissa, which for our purposes is basically infinite enough)

Think of it like the internal-processing version of what we already have in output formats. Handled really carefully, 16 bit dithered is damn close to 'enough' but it can break down easy when you do it wrong. 24 bit dithered (like 64 bit float for internal processing) is infinite enough that, assuming it is properly dithered, and ignoring sample rate which is a separate issue, I think it's beyond serious fault no matter who you are.

I do think you have to dither 24 bit correctly if you intend to pass muster with EVERY possible listener. People hear in very different ways, and some have full-tilt-bozo amazing playback systems and a willingness to conduct extended listening. We don't see either of those things in typical blind testing scenarios: in fact blind testing outright denies extended listening as a matter of course.