Good dither practices, what are yours?
Old 11th January 2019
  #571
Motown legend
 
Bob Olhsson's Avatar
 

Rant on:

The irony is that developers SHOULD simply be handling dithering as an integral part of their processing and DAWs. Failing to do this is every bit as stupid as making bias an option for analog tape machines. Sad to say, this thread makes its readers more knowledgeable than far too many developers.

Rant off
Old 11th January 2019
  #572
Quote:
Originally Posted by monomer View Post
I think you may be wrong on some of this. (and please correct me if I am)

It depends on what the internal mixing of the DAW considers to be 0dBFS.
This concerns 32 bit floats.

So, as far as I know, many DAWs consider the value of 1.0f to be 0dBFS.
This means that the signal below 0dBFS is only coded by the mantissa of the 32 bit floating point, which is 24 bits.
Moreover, the math that applies to this mantissa is basically integer math.

So, if you have a 24 bit source, summed to a 32 bit float that assumes 1.0f is 0dBFS, and you lower the amplitude, you will be truncating.

And so, in this case you do need to dither after the gain change.

If it's a 64 bit fp bus then the whole 24 bit original can fit in the mantissa without any truncation and this problem doesn't occur (or at least it will lead to an error that is waaaay down the scale).

This raises the question, though: what exact floating-point value do the various DAWs consider to be 0dBFS on their mixing busses?
The precision of a 32-bit float at "1.0" is equivalent to a normalized 24-bit fixed point signal. In fact it's a "bit" higher in the true sense: 25 bits of effective precision. It's never been 32-bit in the first place, just 24 bits + 1 (or 53 bits + 1 for 64-bit floating point).


And as mentioned before, one cannot dither floating point truncation.

Ironically, even in theory, it would require adding severe, very high-level and very non-random distortion to the very signal we want to preserve! (Dithering needs a random signal, which is of course no longer the case here, and that defeats all the logic that follows.) I'll maybe take the time to create such a signal, just so everyone can hear how ugly it is!

A floating point dither would be a gross distortion of high amplitude, messing up any signal it would be mixed with, especially for a zero input. That's how nature generally responds to the illogical. This "thing" would be anything but random noise! So it wouldn't help anyway, as truncating anything that has structure (i.e. is not noise) automatically produces new partials: the quantization distortion we want to prevent in the first place. Forget this idea, it's straight illogical voodoo. A perpetuum mobile would be easier to develop.

BTW, have you ever tried to provoke a problem by changing the level of a 32-bit float signal? The thing is, this "problem" practically doesn't exist except in extreme worst cases, e.g. recursive systems like filters. In standard audio routing and export, the issue is largely imagined, not real.

Only fixed point can be dithered, because truncation distortion appears entirely in the lowest levels, and only if they contain tones/partials. Once you make sure there's only noise down there, no quantization distortion can ever appear: the truncation has become a fully linear process -> the wonder of dithering.

The mechanism of dither is: truncation only affects the lowest levels, so we make sure these are fully random by raising the noise floor just beyond the truncation level -> truncated noise remains noise, so it technically cannot distort (i.e. it can't provoke new partials that didn't exist before). The result is truncation at zero distortion, just a higher noise floor.
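
For anyone who wants to see that mechanism in code, here is a minimal sketch of a TPDF-dithered 16-bit quantizer (plain C++; the names and scaling are purely illustrative, not taken from any particular plugin):

Code:
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <random>
#include <vector>

// Quantize float samples (-1.0..1.0) to 16-bit with TPDF dither: the sum of two
// independent uniform values of +/-0.5 LSB each is triangular (+/-1 LSB peak).
// Added before rounding, it keeps the truncation error decorrelated from the
// signal, so no new partials appear, only a slightly higher noise floor.
std::vector<int16_t> quantize_tpdf(const std::vector<float>& in)
{
    std::mt19937 rng(12345);
    std::uniform_real_distribution<double> u(-0.5, 0.5);    // rectangular source, LSB units
    std::vector<int16_t> out;
    out.reserve(in.size());
    for (float x : in) {
        double scaled   = static_cast<double>(x) * 32767.0; // to 16-bit LSB units
        double dithered = scaled + u(rng) + u(rng);          // two RPDFs -> TPDF
        long   q        = std::lround(dithered);             // the actual word-length reduction
        out.push_back(static_cast<int16_t>(std::clamp(q, -32768L, 32767L)));
    }
    return out;
}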

Last edited by FabienTDR; 11th January 2019 at 06:51 PM..
Old 11th January 2019
  #573
Lives for gear
 
monomer's Avatar
 

Quote:
Originally Posted by FabienTDR View Post
And as mentioned before, one cannot dither floating point truncation.
I don't understand: is this also true if the signal stays under 1.0f?

Quote:
A floating point dither would be a gross distortion of very high amplitude, messing up any signal it would be mixed with.
I don't see this happening if the signal is below 1.0f.
You'd be modulating the LSB of the mantissa, which by definition has the least possible influence on the value.

Quote:
BTW, try to provoke a problem by changing the level of a 32-bit float signal! I hope you have a few hundred years to wait for any distortion to appear, though.
Well, if it is indeed true that 0dBFS in the DAW is at 1.0f then you are not looking at a signal that uses the complete 32-bit range of the float. You're basically using just the 24-bit mantissa. That is the problem I was talking about.

Last edited by monomer; 11th January 2019 at 07:32 PM..
Old 11th January 2019
  #574
Lives for gear
 
DistortingJack's Avatar
 

Quote:
Originally Posted by monomer View Post
I don't see this happening if the signal is below 1.0f
You'd be modulating the LSB of the mantissa, which by definition has the least possible influence on the value.
This. When you dither you only change the value of the mantissa, which is noise around the –144 dBFS area relative to the signal itself, no matter what the exponent (i.e. how loud the signal is) happens to be.
Old 11th January 2019
  #575
Quote:
Originally Posted by monomer View Post
I don't see this happening if the signal is below 1.0f
Mhhh...of course this happens. Floating point is not intuitive.

Look below: enter 0.001, then enter 0.0001. Look what happens to the internal representation.

IEEE-754 Floating Point Converter

The exponent in use is base 2, not base 10! (Maybe that's the misunderstanding?)
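
If you'd rather see it without the web converter, here is a small standalone C++ sketch (purely illustrative) that prints the raw IEEE-754 fields. Note how the exponent changes between 0.001 and 0.0001: the quantization step always scales with the exponent, which is why a fixed-level dither cannot cover floating-point truncation.

Code:
#include <cstdint>
#include <cstdio>
#include <cstring>

// Print the sign, biased exponent and stored mantissa bits of a 32-bit float.
static void dump(float x)
{
    uint32_t bits;
    std::memcpy(&bits, &x, sizeof bits);      // type-pun safely
    uint32_t sign = bits >> 31;
    uint32_t expo = (bits >> 23) & 0xFF;      // biased exponent (bias 127)
    uint32_t mant = bits & 0x7FFFFF;          // 23 stored mantissa bits
    std::printf("%g  sign=%u  exponent=%u (2^%d)  mantissa=0x%06X\n",
                static_cast<double>(x),
                static_cast<unsigned>(sign),
                static_cast<unsigned>(expo),
                static_cast<int>(expo) - 127,
                static_cast<unsigned>(mant));
}

int main()
{
    dump(1.0f);
    dump(0.001f);
    dump(0.0001f);
    return 0;
}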




The problems of dithering floating point, in all detail: http://www.thewelltemperedcomputer.c...tingdither.pdf

Concluding:

Quote:
For these reasons [mentioned in the link] it is simply impossible to adequately dither a floating-point system such that quantization error becomes exclusively quantization noise, such as can be accomplished in a fixed-point system.

Last edited by FabienTDR; 11th January 2019 at 08:16 PM..
Old 11th January 2019
  #576
I urge anybody to try to find any issue with floating point in normal workflows before wasting too much time on this, i.e. try to find any distortion appearing due to a lack of dither in floating point before trying to "solve" it.

The floating-point truncation itself, given sufficiently complex input (which music is), quickly becomes chaotic, i.e. nearly indistinguishable from pure randomness. In short, floating-point truncation of audio signals is practically self-dithered. Very likely, there is no problem at all.

It can appear at extremes, though. Say, pushing a pure sine through a naive, badly programmed IIR structure that scales and feeds back previous values indefinitely. This can quickly accumulate measurable errors. But in regular audio routing and export work, I wonder where exactly the problems hide. I can't see them.
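
To make that worst case concrete, here is a rough, hypothetical illustration (not any particular plugin): the same naive one-pole recursion run once with a float32 state and once with a float64 reference, so you can watch the re-quantized feedback path drift away from the reference.

Code:
#include <algorithm>
#include <cmath>
#include <cstdio>

// Feed a sine through the same naive one-pole lowpass twice, once with a float
// state and once with a double state. The recursion keeps re-quantizing its own
// feedback value, so the rounding error accumulates instead of averaging out.
int main()
{
    const double pi = 3.14159265358979323846;
    const double a  = 0.999;                  // heavy feedback, long error memory
    float  yf = 0.0f;
    double yd = 0.0;
    double maxdiff = 0.0;

    for (long n = 0; n < 1000000; ++n) {
        double x = 0.5 * std::sin(2.0 * pi * 100.0 * n / 48000.0);
        yf = static_cast<float>(a * yf + (1.0 - a) * x);   // float32 feedback path
        yd = a * yd + (1.0 - a) * x;                       // float64 reference
        maxdiff = std::max(maxdiff, std::fabs(yd - static_cast<double>(yf)));
    }
    std::printf("max divergence: %.3g (%.1f dBFS)\n",
                maxdiff, 20.0 * std::log10(maxdiff));
    return 0;
}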

This self-dithering also happens to some extent in fixed-point truncation when fed more than 4-5 sines, despite the much simpler truncation mechanism behind it.

No need to get crazy about it: it's in fact very difficult to demonstrate any relevant problems with music signals and floating-point truncation. Even in fixed point, it turns out to be harder than it seems once we stop looking at the pure sine (which music never is!).

Last edited by FabienTDR; 11th January 2019 at 08:29 PM..
Old 11th January 2019
  #577
Lives for gear
 
monomer's Avatar
 

Quote:
Originally Posted by FabienTDR View Post
Mhhh...of course this happens. Floating point is not intuitive.

Look below: enter 0.001, then enter 0.0001. Look what happens to the internal representation.
I think I see my error. The exponent goes negative, so it is involved for numbers below 1.0f.
Old 11th January 2019
  #578
The worst of floating point primarily appears when trying to represent fixed-point data with it. Say, financial calculations, or counting populations. But audio tries to represent a continuous signal, and we always have a natural noise component, i.e. a noise floor. In that sense, continuous signals and systems are usually rather tolerant toward rounding errors (given the error is symmetric). They simply produce a little more noise than usual.

Last edited by FabienTDR; 13th January 2019 at 03:24 PM..
Old 12th January 2019
  #579
On the topic of 64-bit to 32-bit conversions: If a floating-point DAW or plugin is written in C, its arithmetic will almost certainly comply with the IEEE 754 floating-point standard. The standard's default rounding is "round half to even", aka "convergent rounding". This rounding mode is good because it prevents accumulation of bias (DC offset).

More sophisticated rounding rules can employ some kind of randomization. Among these, so-called "stochastic rounding" is equivalent to (mantissa-scaled) rectangular dithering. Chrisj already discussed how this could be implemented, but I'm of the opinion that such an "improvement" would prove inaudible when finally played back through a fixed-point DAC with constant-level dither.
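
For illustration only, a sketch of what such stochastic rounding from double to single precision could look like (a hypothetical helper, not code from any shipping product): the input is rounded to one of its two bracketing float values with probability proportional to proximity, which is the (mantissa-scaled) rectangular-dither behaviour described above.

Code:
#include <cmath>
#include <limits>
#include <random>

// Round a double to one of its two bracketing float values, choosing the upper
// one with probability equal to how close the input is to it. On average the
// result is unbiased, and the rounding error behaves like rectangular noise.
float stochastic_round_to_float(double x, std::mt19937& rng)
{
    float below = static_cast<float>(x);       // nearest float under default rounding
    if (static_cast<double>(below) == x)
        return below;                          // exactly representable, nothing to do

    float above;
    if (static_cast<double>(below) < x) {
        above = std::nextafter(below, std::numeric_limits<float>::infinity());
    } else {                                   // default rounding landed above x
        above = below;
        below = std::nextafter(above, -std::numeric_limits<float>::infinity());
    }

    double span = static_cast<double>(above) - static_cast<double>(below);
    double p_up = (x - static_cast<double>(below)) / span;   // proximity to 'above'
    std::uniform_real_distribution<double> u(0.0, 1.0);
    return (u(rng) < p_up) ? above : below;
}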

Noise shaping may be applied to any rounding scheme. DSP designers are probably better off spending their time designing good noise shaping than trying to improve on the default rounding performance of IEEE 754.

Readers who want a more detailed view of rounding techniques can examine the Wikipedia entry on the subject.

David L. Rick
Old 12th January 2019
  #580
Airwindows
 
chrisj's Avatar
Quote:
Originally Posted by David Rick View Post
More sophisticated rounding rules can employ some kind of randomization. Among these, so-called "stochastic rounding" is equivalent to (mantissa-scaled) rectangular dithering. Chrisj already discussed how this could be implemented, but I'm of the opinion that such an "improvement" would prove inaudible when finally played back through a fixed-point DAC with constant-level dither.
No reason you can't have a triangular probability density function… or indeed highpassed TPDF (Bob O's favorite, last I checked). I already said I'd do it; do I have to drop everything and do it tomorrow?

I even have a good testbed for it. All you gotta do is add an arbitrarily large number to your audio data, cast it to float, and then remove the same arbitrarily large number and play it back. I can monitor the grungiest depths of floating point, because I don't have to center the audio around 0 (where floating point has its highest resolution). I can center it around a huge DC offset, just to make the point. This is not hard, and it'll show how the floating-point TPDF dither works.
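
A minimal standalone sketch of that offset trick, with arbitrary numbers, just to show the principle: the large DC offset forces the float32 exponent up, so the mantissa step, and with it the quantization error, becomes large enough to print directly once the offset is removed.

Code:
#include <cmath>
#include <cstdio>

// Add a large DC offset, squeeze the value through float32, remove the offset
// again, and measure what the mantissa quantization did to the sample.
int main()
{
    const double offset = 1048576.0;        // 2^20, an "arbitrarily large number"
    const double in     = 0.123456789;      // a sample value near full scale

    float  shifted = static_cast<float>(in + offset);        // quantized to the float32 grid
    double out     = static_cast<double>(shifted) - offset;  // remove the offset again

    double err = out - in;
    std::printf("in  = %.9f\nout = %.9f\nerr = %.9f (%.1f dBFS)\n",
                in, out, err, 20.0 * std::log10(std::fabs(err)));
    return 0;
}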

I completely agree that one stage of floating-point dither will be completely, absurdly inaudible played back on any real-world DAC. I'd like to join in pointing that out: we are not talking about a need to do this when rendering out your 32-bit files to send to mastering.

Just so long as the corollary is ALSO heard: on many computers, including all Macs running CoreAudio, we return our work data to 32 bit EVERY time we're finished with the smallest change. I'm sure there are many audio DSP plugins out there that run entirely on floats for audio data, meaning that they return to 32 bit dozens or hundreds or thousands of times PER SAMPLE as part of their processing. It adds up, and though the obvious answer is 'so use doubles or Float64 in audio units or long double to be ruthless about it', there's gonna be a lot of stuff out there that's having massive issues with quantization, especially in the top 6 dB of the audio signal below clipping… where modern music genres seem to spend all their time.

We can do a lot better. Even if 'one dose' of the cure is far too weak to notice.

(actually I'm not sure stochastic rounding is at all equivalent to rectangular dither. Dither affects far more than the corner case where you're deciding whether to round up or down from exactly one half. Maybe the testbed can show this)
Old 12th January 2019
  #581
Quote:
Originally Posted by chrisj View Post
(actually I'm not sure stochastic rounding is at all equivalent to rectangular dither. Dither affects far more than the corner case where you're deciding whether to round up or down from exactly one half. Maybe the testbed can show this)
Hi Chris,

I'm not sure whether there's a universally agreed-on definition for what constitutes "stochastic rounding", but the authors of the Wikipedia entry define it thus:

Quote:
Stochastic rounding
Rounding ... to one of the closest straddling integers with a probability dependent on the proximity is called stochastic rounding and will give an unbiased result on average. [citation to a 2015 paper by Gupta et al. on its application in Deep Learning]
...
Stochastic rounding is a way to achieve 1-dimensional dithering.
Note that the presented formula doesn't just apply to the exactly-one-half case, but to all results that fall between the representable values of the output mantissa.

I agree that you could easily extend this from rectangular to triangular dithering, but I don't see the point in trying to control noise power modulation that is happening 25 bits down. Also, if it's true that on some platforms the 64->32 bit rounding is happening hundreds or thousands of times, then I think we clearly want to prioritize limiting the noise growth, which suggests using rectangular dither. I'm not sure Bob would still advocate triangular dither in this floating-point case, but I'll let him speak for himself.

If you actually implement some kind of floating-point "dither", you should publish an AES paper describing your algorithm. Perhaps there's an audio engineering student somewhere who's looking for a thesis project and would be willing to structure some listening tests.

David
Old 12th January 2019
  #582
Motown legend
 
Bob Olhsson's Avatar
 

The reason I use TPDF is that virtually all modern consumer playback gear employs digital volume and tone controls. On top of that, most people's first impression of a recording will be by way of lossy coding.
Old 12th January 2019
  #583
Quote:
Originally Posted by Bob Olhsson View Post
The reason I use TPDF is that virtually all modern consumer playback gear employs digital volume and tone controls. On top of that, most people's first impression of a recording will be by way of lossy coding.
Sad but true.

Would you mind elaborating on the difference between real-world studio-use results of Rectangular vs Triangular PDF for dithering?

This is a concept that you guys take for granted but many of us do not fully comprehend in terms of actual choice and usage at work.

Thank you Bob and everybody.-

Ezequiel Morfi | TITANIO.
Buenos Aires, Argentina.
Old 12th January 2019
  #584
Quote:
Originally Posted by FabienTDR View Post

And as mentioned before, one cannot dither floating point truncation.

Only fixed point can be dithered, because truncation distortion appears entirely in the lowest levels, and only if they contain tones/partials. Once you make sure there's only noise down there, no quantization distortion can ever appear: the truncation has become a fully linear process -> the wonder of dithering.

The mechanism of dither is: truncation only affects the lowest levels, so we make sure these are fully random by raising the noise floor just beyond the truncation level -> truncated noise remains noise, so it technically cannot distort (i.e. it can't provoke new partials that didn't exist before). The result is truncation at zero distortion, just a higher noise floor.
Yet another very clear and straightforward explanation. Thank you @FabienTDR.
I don't see how this version contradicts the previous ones by @David Rick and @chrisj, so it only adds to a better understanding of this whole topic.

In a real-world scenario, I'm wondering what the aforementioned geniuses think about carelessly going back and forth between 32- and 64-bit floating point when working with a DAW, various third-party plug-ins, other audio editors and different types of file renders.

(for example: mixing inside the DAW in a 64-bit project-resolution session but running a plug-in that works at 32 bits, or exporting a file to 32-bit to open momentarily in iZotope RX or Celemony Melodyne and then re-importing it into the 64-bit environment for further processing).

Thanks to everybody!

Ezequiel Morfi | TITANIO.
Buenos Aires, Argentina.
Old 12th January 2019
  #585
Motown legend
 
Bob Olhsson's Avatar
 

I'm not an expert at all, but I often can hear undithered truncation causing low-level chatter that masks the depth cues in orchestral recordings and makes the sound crunchy when you apply signal processing. My understanding from what I've read is that rectangular was a better-than-nothing solution back when little processing power was available.

My "guru" is James Johnston who co-founded the first Usenet audio newsgroup back when he worked for Bell Labs during the '80s. JJ taught us the facts about digital audio technology while the rest of the world was awash in digital marketing hype from Sony. A great deal of it was over my head but he once quipped to me in a conversation at an AES convention "Don't even THINK about not dithering because the math is wrong!"
Old 12th January 2019
  #586
Thank you @Bob Olhsson for such a quick reply!

I guess rectangular is better than nothing... and triangular is better than rectangular? That seems to be the form most used among dithering applications/plug-ins.

You say you can often "hear" undithered truncation, but I was wondering whether there is a way one could measure a signal/music program and see if it has undergone truncation, dithering, dithered truncation, etc. Once again, from a user perspective (user = mixing/mastering engineer), not a scientist's point of view.

Thanks everybody!

Ezequiel Morfi | TITANIO
Buenos Aires, Argentina
Old 12th January 2019
  #587
Motown legend
 
Bob Olhsson's Avatar
 

I suppose there must be a way to measure it, but the problem is lost information that can't be retrieved. When additional processing has caused crunchiness, 100% of the time it turned out my client hadn't dithered their 24-bit files or used the Pro Tools TDM dithered mixer. No doubt there have been plenty of times when I couldn't tell, but having never been wrong when I believed there was a problem made me into a dither fanatic.
Old 12th January 2019
  #588
Quote:
Originally Posted by Bob Olhsson View Post
The reason I use TPDF is that virtually all modern consumer playback gear employs digital volume and tone controls.
That's a completely valid reason to make TPDF dither the preferred choice for audio export. It also means that some of the carefully-crafted noise shaping curves we employed back when everything was released on CD are no longer safe to use. Those curves are only psychoacoustically optimal when heard at their originally-intended playback level.

If there's any chance that the end consumer can apply enough gain to hear the noise floor, then TPDF dither should be used because it prevents the noise power from changing in response to the audio.

My recent suggestion for rectangular dither was strictly in the context of a proposed improvement in the way we convert double-precision floating point values to single-precision floating point inside a typical DAW or plug-in chain. This is an operation that's happening 152 dB below the instantaneous signal level, and (exponent-adjusted) rectangular PDF dither (aka stochastic rounding) will suffice to convert any resulting distortion into noise, 149 dB down. If, as Chrisj posits, we do 1000 such operations in our production chain, the numerical noise will still be 119 dB below the instantaneous signal level. It's doubtful that a consumer could apply enough playback gain to make it audible. If they do, triangular dither won't help much because, in contrast to the fixed-point case, the round-off error power in floating point is inherently correlated to the signal level. But if Chris wants to add TPDF rounding as an option, the noise penalty is only another 1.5 dB. Contrary to what I implied before, noise floor growth through multiple passes will be the same in both cases.
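
A quick numerical sanity check of that accumulation arithmetic (purely illustrative, with rounding errors modelled as unit-scaled uniform noise): N independent round-off errors add in power, so the floor grows by roughly 10*log10(N) dB, i.e. about 30 dB for 1000 operations, which is how 149 dB down becomes 119 dB down.

Code:
#include <cmath>
#include <cstdio>
#include <random>

// Monte Carlo check: sum N independent, uniformly distributed round-off errors
// and compare the power of the sum to the power of a single error. Uncorrelated
// noise adds in power, so the growth should land near 10*log10(N) dB.
int main()
{
    std::mt19937 rng(7);
    std::uniform_real_distribution<double> err(-0.5, 0.5);  // one rounding error, LSB-scaled

    const int N = 1000;        // roundings in the processing chain
    const int trials = 20000;

    double singlePower = 0.0, summedPower = 0.0;
    for (int t = 0; t < trials; ++t) {
        double one = err(rng);
        double acc = 0.0;
        for (int k = 0; k < N; ++k) acc += err(rng);
        singlePower += one * one;
        summedPower += acc * acc;
    }
    std::printf("noise growth = %.1f dB (expected %.1f dB)\n",
                10.0 * std::log10(summedPower / singlePower),
                10.0 * std::log10(static_cast<double>(N)));
    return 0;
}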

Let me be perfectly clear about my remarks here and above: They are made in the context of a discussion between DSP experts about the potential for incremental improvements in plug-in coding. Nothing I write here should change your best-practice production workflow. Take Bob's advice and use TPDF on export.

David L. Rick
Seventh String Recording
Old 13th January 2019
  #589
Motown legend
 
Bob Olhsson's Avatar
 

I'm glad you made the point about the number of calculations and noise growth. People will often take one operation out of context, which is misleading.
Old 13th January 2019
  #590
Quote:
Originally Posted by Bob Olhsson View Post
I'm not an expert at all, but I often can hear undithered truncation causing low-level chatter that masks the depth cues in orchestral recordings and makes the sound crunchy when you apply signal processing. My understanding from what I've read is that rectangular was a better-than-nothing solution back when little processing power was available.
I found this type of distortion really easy to identify by listening to the original signal minus the truncated signal (dithered or not).

This distortion (which really is the exact mirror image of the distortion appearing in the main signal) becomes very apparent when the music isn't there. Simply copy the track, quantize (and maybe dither) one copy, and invert its polarity. Listen to the sum (make sure both tracks have exactly unity gain).
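
The same null test is easy to reproduce offline. A small sketch (a hypothetical helper, using raw truncation with no dither) that returns the residue buffer you would be listening to:

Code:
#include <cmath>
#include <vector>

// Build the "difference signal": quantize a copy of the track to the given bit
// depth by plain truncation and subtract it from the original. Whatever
// pattern survives in the result is pure quantization distortion plus noise.
std::vector<double> truncation_residue(const std::vector<double>& in, int bits)
{
    const double steps = std::pow(2.0, bits - 1);   // e.g. 128 levels per polarity at 8-bit
    std::vector<double> residue;
    residue.reserve(in.size());
    for (double x : in) {
        double truncated = std::trunc(x * steps) / steps;  // raw truncation
        residue.push_back(x - truncated);                  // original minus truncated copy
    }
    return residue;
}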

The primary problems I found related to gating and fading and, ironically, particularly to synthetic sounds that were arguably meant to sound very digital. Nice recordings largely seemed to be immune to raw truncation, even down to 12-bit.

But this aside: a bit of noise really doesn't hurt. For example, it makes dialogue far more intelligible and agreeable to listen to. It seems to act like a perceptual glue. Not much noise, just a little bit.

My personal verdict is more about: Never cut the noise floor away, even in signal pauses.
Old 13th January 2019
  #591
Lives for gear
 
stinkyfingers's Avatar
 

Quote:
Originally Posted by FabienTDR View Post
My personal verdict is more about: Never cut the noise floor away, even in signal pauses.
Were you aware of the issue with Live "dropping" the signal in processing chains involving 3rd-party plug-ins?
It was brought up by another user here, and I was able to confirm that they are completely dropping the signal (digital black) in certain situations.
Kind of disturbing...
Old 13th January 2019
  #593
Lives for gear
 
stinkyfingers's Avatar
 

The thread is here...

Data Loss On Low Level Signals
Old 13th January 2019
  #594
I prepared the aforementioned experiment in the form of a Reaper project file:



Listen to the result of the example above applied to Marvin Gaye's "What's Happening Brother" (I guess distribution rights are not required in this case)

https://www.tokyodawn.net/labs/public/dithertest.wav

This is raw 8-bit truncation (from a 16-bit CD source), output minus input, amplified by 25 dB.
(The project contains two saturator plugins purely meant to provide sufficient gain for comfortable auditioning of higher bit-depth truncation errors, just in case you're wondering)


Download the project here: https://www.tokyodawn.net/labs/public/dithertest.zip

Try more modern and more gated/faded material to get an idea of the worst-case situations. What you hear IS the pure distortion plus a bit of noise. So any pattern, anything with structure, rhythm or tones appearing in the difference signal, is distortion (audible or not, its presence is beyond question). This often sounds like waves hitting a cliff, or wind changing direction.

Try turning on the dither to hear its effect, and how it literally eliminates any patterns in the difference signal, if you found any(!) (when applied properly, of course).

Last edited by FabienTDR; 13th January 2019 at 07:19 PM..
Old 14th January 2019
  #595
Airwindows
 
chrisj's Avatar
Quote:
Originally Posted by FabienTDR View Post
Listen to the result of the example above applied to Marvin Gaye's "What's Happening Brother" (I guess distribution rights are not required in this case)
YouTube ContentID will getcha!

This is absolutely fascinating. You're right, there are no identifiable features in that, even though it's the residue of raw truncation at 8 bit.

I'd add to this observation my experience of 'double-blind identifying' truncated noise. There wouldn't be identifying features in the residue of that either, but I was able to distinguish it every time by an objectionable character in the sound, until the noise was itself dithered using correct TPDF dither.

Once that's done, the residue will continue to not have recognizable features but the low-bit audio output can no longer be distinguished from the original, full-resolution noise sample. So even though 'nothing changes' in the residue, the character of the resulting quantized file changes completely.
Old 15th January 2019
  #596
This thread keeps getting more and more interesting. Thank you @Bob Olhsson, @chrisj and @FabienTDR for pouring so much good info here.

Fabien: for your consideration for a future TDL project or Limiter6 update, we really need an upgrade to the wonderful bit-meter by Stillwell. I can't believe there's not a more modern (also, better looking) bit-meter plug-in than that one nowadays.

Ezequiel Morfi | TITANIO
Buenos Aires, Argentina.-
Old 19th January 2019
  #597
Airwindows
 
chrisj's Avatar
'K, I got it. Expect the demo Sunday unless I get so snowed in from the latest nor'easter that I have no power or internets.

Showing the truncation is absolutely trivial: just add a big offset and you can make the truncation as loud as you like. It's a positive offset, so the top of the wave is coarser than the bottom.

The actual dither is only medium trivial but reduces to a nice tidy three lines of code, and I set it up so it can drop into my existing plugins using the variable I was using for the noise shaping.

THAT proved alarming, now that I've got such good floating-point-truncation monitoring: the earlier form of FP noise shaping turned out to be somewhat better, though flawed. The kind that I thought was a big improvement didn't really do the job and threw a huge amount of bassy noise in there… at typically around -290 dB, so at least it wasn't doing a lot of harm. Still, embarrassing.

TPDF dither in this instance (my PaulDither, the commonly used highpassed dither that requires only one random number per sample, as used by Paul Frindle) so completely and totally blows away anything I could do with 'noise shaping' that I plain gave up and determined I would dither to floating point from now on. Fancy-pants tone experiments can go to 24-bit or 16-bit, since I do still like them. 32-bit float gets correct dither.
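
For reference, a sketch of the general highpassed-TPDF idea being described, shown here as a plain 16-bit fixed-point quantizer rather than chrisj's floating-point version (names and details are illustrative): one new random number per sample, with the previous one subtracted, which yields a triangular PDF and tilts the dither noise toward high frequencies.

Code:
#include <cmath>
#include <random>

// Highpassed TPDF dither with one random number per sample: the dither term is
// (current random - previous random), whose distribution is triangular and
// whose spectrum rises at 6 dB/octave, pushing the added noise upward in
// frequency where it is less audible.
class HighpassTPDFDither16
{
public:
    int process(double sample /* -1.0 .. 1.0 */)
    {
        double current = dist_(rng_);                 // uniform in -0.5..0.5 LSB
        double dither  = current - previous_;         // TPDF, highpassed
        previous_ = current;
        double q = std::floor(sample * 32767.0 + dither + 0.5);  // round with dither
        if (q >  32767.0) q =  32767.0;
        if (q < -32768.0) q = -32768.0;
        return static_cast<int>(q);
    }
private:
    std::mt19937 rng_{1};
    std::uniform_real_distribution<double> dist_{-0.5, 0.5};
    double previous_ = 0.0;
};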

Turns out the changing amplitude is indeed a thing, but it absolutely doesn't matter for audio purposes, and not only because it's so faint that you wouldn't care. I was testing on sine waves, including very low frequency ones. What becomes instantly obvious is this: over low frequencies, the absence of dither leads to LOUD truncation artifacts (relative to the mantissa noise floor) that are extremely obtrusive for their level. TPDF dithering LINEARIZES these low-frequency changes, producing an average output that can change smoothly against a smooth noise background that doesn't fluctuate as much as you'd think (I did notice a more striking fluctuation testing a 64-bit version, but it seemed to still work as intended).

Regardless of the amplitude of the dither as it changes to fit the exponent, the truncation artifacts are gone. Not suppressed or buried in noise: gone. That is after all what TPDF dither does. It does it just as effectively across varying exponent amplitudes, where the truncation artifact levels also vary.

So yeah, it's going up Sunday. Open source, MIT license, meaning it can go in commercial software too (not a viral license like GPL). Talk to me privately if you don't want to even acknowledge that you're using the code (otherwise, a simple attribution is all you need).

Not only can you dither to 32 bit float, you should. As long as it's this easy. Then any subsequent processing, no matter what it is, won't drag out or build up those truncation artifacts. See ya Sunday
Old 19th January 2019
  #598
Lives for gear
 
stinkyfingers's Avatar
 

Quote:
Originally Posted by morfi View Post
... we really need an upgrade to the wonderful bit-meter by Stillwell. I can't believe there's not a more modern (also, better looking) bit-meter plug-in than that one nowadays.
.-
FWIW There’s a bit meter in the new SPL Hawkeye...

I don’t understand how a bit meter could be useful. What do you do with it? I’m genuinely curious...
Old 19th January 2019
  #599
Lives for gear
 
monomer's Avatar
 

Quote:
Originally Posted by stinkyfingers View Post
FWIW There’s a bit meter in the new SPL Hawkeye...

I don’t understand how a bit meter could be useful. What do you do with it? I’m genuinely curious...
You can use it to see whether some algorithm (plugin, whatnot) puts information into particular bits of a sample. You can spot truncation, for instance.
Though I'm not sure how it works out now that I have learned that floating-point numbers are more complicated and spread bits all over the place...
Old 19th January 2019
  #600
Lives for gear
 
stinkyfingers's Avatar
 

So you have a bit meter and spot truncation... then what?