Evening all, been reading about DSD and wouldn't mind if anyone could help me get my head around the concept of the amplitude being represented by 1-bit instead of 16 or 24.
As I understand it, with PCM, after the amplitude level is read by the system it is then decimated so that it can be quantised to one of, say, the 16,777,216 steps provided by 24-bit. DSD does no decimation but rather keeps the original value of the amplitude, which it represents as a 1-bit word. Is this correct?
Also, if this is the case, was the initial purpose of the decimation in PCM that there would have been an infinite number of amplitude values unless the signal was quantised?
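To check my own understanding I knocked up a toy sketch (entirely my own code, not from the Sony docs, so happy to be told it's wrong): one function that quantises a sample to one of the 2^24 PCM steps, and a crude first-order delta-sigma loop of the sort I gather DSD uses, which spits out a stream of single bits whose short-term average tracks the input level.

```python
def quantize_pcm(x, bits=24):
    # Toy PCM quantiser: snap a sample in [-1, 1] to one of 2**bits levels.
    step = 2.0 / (2 ** bits)
    return round(x / step) * step

def dsd_1bit(samples):
    # Toy first-order delta-sigma modulator: each output is a single bit
    # (+1 or -1); the accumulated error feeds back into the next decision,
    # so the density of +1s tracks the input amplitude.
    out, acc, fb = [], 0.0, 0.0
    for x in samples:
        acc += x - fb
        bit = 1.0 if acc >= 0 else -1.0
        out.append(bit)
        fb = bit
    return out

# A constant input of 0.5 should come out as a bit stream that is
# "+1" roughly three times out of four.
bits = dsd_1bit([0.5] * 1000)
print(sum(bits) / len(bits))   # close to 0.5
print(quantize_pcm(0.3))       # close to 0.3, to within one 24-bit step
```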
Thanks in advance,
RG
btw, didn't want to post this in the recent DSD thread as it was too heated for a dumb question like this!! I've also seen the DSD docs from Sony etc., but their terms aren't as simple as I'd like.