192khz, 96khz, 48khz. I hear the difference.
Old 4 weeks ago
  #661
Lives for gear
 
IanBSC's Avatar
Quote:
Originally Posted by sax512 View Post
I guess it's only by pure luck, then, that it also happens to be just right to encode all and only what we need for the human ear, with enough room to spare so that, after learning from some initial fu*kery with analog anti-aliasing filters, we do get transparent sound from it (if we're not trying to upsell the customer for the next ultra-mega-great sample rate converter).
Perhaps you could clue the rest of us in on a particular recording or playback chain that will give us transparent sound from 16/44.1khz?
Old 4 weeks ago
  #662
Gear Maniac
 

Quote:
Originally Posted by sax512 View Post
Says anyone who doesn't want to take the time to learn the science behind the subject at hand.

And without the science there wouldn't be any digital audio, by the way.
It's not like people started building circuitry at random and went 'Hmm.. this sounds close enough to an analog audio chain. Let's figure out some science to explain it..'
And that's why, for one, I thank you for the info in this thread.
I'm just a consumer who remembers a bit of his university course on the Fourier transform and who reads a lot about sound synthesis and sound in general, so I did not understand everything, but I appreciate what I did understand.
Yes, the more you learn the more questions you may have, but it's also true that the more you learn the more you understand, hopefully.
Old 4 weeks ago
  #663
Gear Maniac
 

Quote:
Originally Posted by norfolk martin View Post
By example, I have not yet learned to quit getting involved in these debates!
I think that's one of those “multiple coincident experiences” you talked about
You are obviously not alone ...
Old 4 weeks ago
  #664
Gear Maniac
 

Quote:
Originally Posted by Space1999 View Post
There doesn’t need to be any scientific debate about this.

Pat
I'm sorry... I'm too honest for my own good...
I felt a bit dumb trying to follow all the technical stuff (still dunno what a tap is) ... but sometimes you really make me feel better about myself
Old 4 weeks ago
  #665
Gear Maniac
 

Quote:
Originally Posted by FatB View Post
If my post was misleading I'm sorry about that. But if you can bear with me: if 0 info is lost and it's an audio signal (let's forget synths for a bit), so no perfect square wave is present in the input, then the raw output of the DA conversion should have no stair effect nor square part, so why do you need a filter? I always thought the filter was there to smooth the “stair” effect introduced by sampling, and that with a good filter you could recreate a close enough signal that no noticeable (to the human ear) difference was present. So in a way I thought that you lost “some” info, but that this info was so small in the context of audio that it was negligible.
And to me, smoothing the squares is equivalent to saying you're filtering the top harmonics.

But I see that you did a s**t-load of explaining, so I won't take it personally if you don't answer
I made these graphs hoping they will help people understand.
In the air (1), you generally have frequency content exceeding 20 kHz (usually not much, but it's there).
Your ears can only hear the part of it that is between 20 Hz and 20 kHz (2).
When you recreate just that part (the one that looks like a rectangle), to your ear it is no different from hearing the whole thing with all its frequency content (including the little half-circle-looking things).
(2) is what you NEED to encode and reproduce.
If we don't agree on this, there really is no point in reading further.

THE THEORY:
Believe it or not, sampling actually ADDS stuff to the original signal. When you look at the signal in the frequency domain, it creates infinite replicas of the original spectrum, centered at multiples of the sample rate. So, for example, when you sample a 10 kHz sine wave at 44.1 kHz, you also create sine waves at 34.1, 54.1, 78.2, 98.2 kHz, etc...
This is shown in graphs (3) and (5) for 44.1 kHz and 96 kHz sample rates.
If you don't filter the original signal (1) with an analog filter before sampling, you get part of the spectrum mirrored back into the audio band (aliasing), as shown in (3).
Assuming you have filtered out the content above Fc/2 with the ADC anti-aliasing filter, so that the spectrum between 0 and Fc/2 is 'clean', at the output stage you can simply filter out the content above Fc/2 and get rid of the replicas.

One can see how 96 kHz (6) retains more of the actual frequency content in (1), while 44.1 kHz only retains the content of interest to your ears (2).
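If you'd rather poke at this in code than squint at the graphs, here is a small numpy sketch (the 10 kHz tone and the dense "analog" grid are just illustrative choices, not any particular converter) that simulates ideal sampling with an impulse train and shows the replicas appearing exactly where described:

Code:
import numpy as np

fs = 44_100                      # sample rate under discussion
step = 64                        # dense-grid points per sampling period
fs_dense = fs * step             # ~2.82 MHz grid standing in for "continuous" time
dur = 0.1                        # 100 ms -> 10 Hz FFT bin spacing
t = np.arange(int(fs_dense * dur)) / fs_dense

analog = np.sin(2 * np.pi * 10_000 * t)    # the 10 kHz "analog" tone

comb = np.zeros_like(analog)               # ideal impulse-train sampling:
comb[::step] = 1.0                         # keep every 64th point, zero the rest
sampled = analog * comb

spec = np.abs(np.fft.rfft(sampled))
freqs = np.fft.rfftfreq(len(sampled), 1 / fs_dense)

band = freqs < 100_000                     # look at everything below 100 kHz
peaks = freqs[band][np.argsort(spec[band])[-5:]]
print(np.sort(np.round(peaks / 1000, 1)))  # ~[10.  34.1  54.1  78.2  98.2] kHz

The five strongest lines are the original tone plus its replicas around 44.1 and 88.2 kHz, which is all the "ADDS stuff" part means.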



THE PRACTICE:
There are a couple assumptions in the theory that don't apply in reality:
1. You can't filter out enough content of (1) above Fc/2 with an analog anti-aliasing filter when you want to sample at Fc = 44.1 kHz, without messing up amplitude and phase of the audio band.
2. You can't filter out enough of the replicas at the output stage of the DAC with an analog filter, without messing up amplitude and phase of the audio band

So the actual sampling in the ADC is done at a very high sample rate (~MHz), so that the replicas are spaced much further apart than if you were to sample straight at the final sample rate (7).
This means the analog anti-aliasing filter at the ADC input stage can be made gentle, without affecting the amplitude or the phase of the audio band.
This high-rate sampling is quantized at low bit resolution, which adds an error relative to the perfect sample values. This error has its power mostly at frequencies well above the hearing range, and it will be filtered out when we finally decimate to the final sample rates, shown in (8) and (9).
At the input of the decimator is a digital filter (not shown) operating at the MHz sample rate, which can filter out all content above Fc/2 very effectively, even with the narrow transition band that a 44.1 kHz sample rate demands.
After that, the decimator takes the sample rate to the final value Fc (and that in and of itself makes the replicas at multiples of Fc reappear, so to speak).
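Before moving on: if you want to see the "filter digitally at the high rate, then decimate" step in action, here is a small scipy sketch (the 2.8 MHz rate and the test tones are made up for illustration, not taken from any particular converter's design):

Code:
import numpy as np
from scipy.signal import resample_poly

fs_out = 44_100
fs_mod = fs_out * 64            # illustrative high "modulator" rate (~2.82 MHz)
t = np.arange(fs_mod) / fs_mod  # 1 second of signal

# a 1 kHz tone we want to keep, plus a 30 kHz tone that must not alias into the audio band
x = np.sin(2 * np.pi * 1_000 * t) + 0.5 * np.sin(2 * np.pi * 30_000 * t)

# polyphase decimation: the built-in FIR lowpass (cutoff ~ fs_out/2) runs at the
# high rate, so it can have a narrow transition band without analog heroics
y = resample_poly(x, up=1, down=fs_mod // fs_out)

f = np.fft.rfftfreq(len(y), 1 / fs_out)
Y = np.abs(np.fft.rfft(y))
print("peak:", f[np.argmax(Y)], "Hz")                          # ~1000.0
alias = Y[np.argmin(np.abs(f - (fs_out - 30_000)))]            # 30 kHz would fold to 14.1 kHz
print("alias residue:", round(20 * np.log10(alias / Y.max()), 1), "dB")

The 1 kHz tone survives untouched, and what little of the 30 kHz tone leaks past the digital filter shows up folded to 14.1 kHz, tens of dB down (a production-grade decimation filter would push it far lower than scipy's default).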

So we have to get rid of them...
In the DAC (10) we oversample the 44.1 kHz stream by 4x (just for the sake of explanation). That by itself leaves the replicas centered at 44.1, 88.2, 132.3 kHz, etc. in place, but it takes the sampling rate up to 176.4 kHz. This makes it easier to design a digital filter (implied in (10)) that cuts out those replicas, so that the analog low-pass filter at the final stage can be made transparent in the audio band, since it can have a much wider transition band.

Oversampling is not the same as recreating the signal in (1) from (2).
That is to say, once you have gotten rid of the circle looking things, you can't put them back, no matter how much you oversample.
That can't be done. I saw some comments above implying this, so I just wanted to get it off the table.
That's OK, though. We want them gone.
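Same idea on the playback side, as a sketch (numbers again just for illustration): zero-stuffing to 4x by itself leaves the images in place, and it is the digital interpolation filter that removes them, leaving only the content near multiples of 176.4 kHz for a gentle analog filter to deal with:

Code:
import numpy as np
from scipy.signal import firwin, fftconvolve

fs = 44_100
t = np.arange(fs) / fs                      # 1 s of a 10 kHz tone at 44.1 kHz
x = np.sin(2 * np.pi * 10_000 * t)

# step 1: insert three zeros between samples (rate -> 176.4 kHz).
# The images of the 10 kHz tone are still there: 34.1, 54.1, 78.2 kHz...
up = np.zeros(len(x) * 4)
up[::4] = x
fs4 = fs * 4

# step 2: a digital interpolation filter at the high rate removes those images
h = firwin(511, 20_000, fs=fs4) * 4         # gain of 4 restores the amplitude
y = fftconvolve(up, h, mode='same')

for sig, name in [(up, "zero-stuffed"), (y, "filtered")]:
    S = np.abs(np.fft.rfft(sig))
    f = np.fft.rfftfreq(len(sig), 1 / fs4)
    ratio = S[np.argmin(np.abs(f - 34_100))] / S[np.argmin(np.abs(f - 10_000))]
    print(name, "image at 34.1 kHz vs tone:", round(20 * np.log10(ratio + 1e-12), 1), "dB")

Before the filter, the 34.1 kHz image is as big as the tone itself (about 0 dB); after the filter it is down by tens of dB, and only the images near multiples of 176.4 kHz remain for the analog stage.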


THE DEVIATION FROM IDEALITY:
Where is error mainly introduced in all this?

A. Filters:
With lower target sample rates in the decimator stage, the transition band from passband to stopband is narrower. This calls for a longer filter (and hence more processing time) to achieve the same stopband attenuation.
The stopband attenuation is actually very high, but not perfect. So the content that is not completely removed is mirrored back below Fc/2, as aliasing. It is the stuff that leaks through the digital filter in (8) and (9).
This is an error whose power is well below what can be heard.
But with some filter designs you MIGHT also get -3 dB at 20 kHz. With some others you don't. Between aliasing and non-flatness in the audio band, non-flatness is the more offensive error (higher in power), but it can also be avoided (just as aliasing can be reduced further, at the expense of more processing time).
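To put numbers on the "narrower transition band costs more time" trade-off, here is a rough estimate with scipy's Kaiser rule of thumb (the 100 dB target and the band edges are just illustrative choices):

Code:
from scipy.signal import kaiserord

atten_db = 100                                       # desired stop-band rejection
for fs, passband, stopband in [(44_100, 20_000, 22_050), (96_000, 20_000, 48_000)]:
    width = (stopband - passband) / (fs / 2)         # transition width, normalized to Nyquist
    numtaps, beta = kaiserord(atten_db, width)
    print(fs, "Hz:", numtaps, "taps")                # ~139 taps vs ~23 taps

Same attenuation, but the narrow 20 kHz-to-22.05 kHz transition costs roughly six times the filter length (and therefore time) of the relaxed one, which is exactly the trade-off being described.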

B. IMD:
The main issue, though, is when the half-circle-looking things are fed into an amp and a speaker.
While in (9) we are still inside the ADC and the IMD is not that bad, when we feed that content out of the DAC to a speaker, things can get bad.
That extra content creates an additional error which is hundreds of times higher in power than any non-flatness (let alone aliasing). This is IMD (intermodulation distortion).
We're not even taking into account the extra IMD that happens inside the ADC/DAC when it has to handle the extra frequency content, because that is much weaker in power than the IMD caused by the speakers.
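A toy numpy illustration of the mechanism (the cubic "speaker" below is made up for the example; it is not a model of any real driver): two tones you can't hear produce difference products you can:

Code:
import numpy as np

fs = 192_000                                  # high rate so both tones are representable
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 24_000 * t) + 0.5 * np.sin(2 * np.pi * 30_000 * t)

# a mildly nonlinear, memoryless stage standing in for an amp/driver
y = x + 0.1 * x**2 + 0.05 * x**3

S = 20 * np.log10(np.abs(np.fft.rfft(y)) / len(t) * 2 + 1e-15)
f = np.fft.rfftfreq(len(t), 1 / fs)
for probe in (6_000, 18_000, 24_000, 30_000):  # 30-24 = 6 kHz and 2*24-30 = 18 kHz land in-band
    print(probe, "Hz:", round(S[np.argmin(np.abs(f - probe))], 1), "dB")

Neither input tone is audible, but the 6 kHz and 18 kHz intermodulation products that come out of the nonlinearity are right in the middle of the hearing range.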

Additional error:
Sampling is one thing, and it creates the extra replicas as shown in the graphs. But a sampled signal is still an analog signal (that's why we can show its spectrum).
When we quantize, we add noise due to the fact that we can't encode all the possible values, but only a finite number of them.
The 'signal' out of this stage is the digital signal, and it's not even a signal in the traditional sense attributed to the word.
We can calculate the Z transform of it (of which the discrete-time Fourier transform is a particular case), but it's just numbers sitting in a memory.
Anyway, what is the error introduced by quantization? Its power has a TOTAL value of ~ -96 dB full scale. That is below what humans can hear on top of musical content, unless that content is recorded at an extremely low level (lower than you have ever experienced with any CD).
But in the recording stage it is nice to encode with 24 bits if a lot of signal processing is needed (and it usually is, with non-purist types of recordings).
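A quick sanity check of that ~ -96 dB figure (the textbook rule of thumb for an N-bit quantizer is roughly 6.02·N + 1.76 dB of SNR for a full-scale sine; the sketch below uses plain rounding, whereas real converters add dither, which raises the floor by a few dB):

Code:
import numpy as np

bits = 16
print("rule of thumb:", round(6.02 * bits + 1.76, 1), "dB")   # ~98 dB for a full-scale sine

fs, n = 48_000, 1 << 18
t = np.arange(n) / fs
x = 0.9 * np.sin(2 * np.pi * 997 * t)             # near-full-scale tone

q = 2.0 / (1 << bits)                             # step size for a +/- 1.0 range
xq = np.round(x / q) * q                          # 16-bit requantization

err = xq - x
print("measured:", round(10 * np.log10(np.mean(err**2) / np.mean(x**2)), 1), "dB")
# lands in the neighbourhood of -97 dB, i.e. the "~ -96 dB full scale" ballpark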

Speaking of purist types of recordings, if somebody is interested in what else I badly invest my time in, this is another battle against windmills I embarked on a while ago.
A better binaural microphone

Now I have moved on to speaker design (because one can't berate them without attempting to make them better. That's just not a nice thing to do).
Attached Thumbnails: graphs.jpg (the numbered graphs referenced above)
Old 4 weeks ago
  #666
Lives for gear
 

Quote:
Originally Posted by sax512 View Post
Yes. IMD, just to name one (possibly the worst).
Did anybody mention it, yet?

Did you break the news to everyone tracking over at the Capitol tower studios?
Old 4 weeks ago
  #667
I've been following this thread for a while... more interested at certain times, yawning at other times. But I can't walk away.

I do know that mostly what I hear negatively in a lot of music from the digital world is:

1) over-hyped EQ and bad EQ choices;
2) immature use of compression, and having no idea how to set attack and release times;
3) overuse of distortion, saturation and hype effects;
4) bad balances;
5) more bad compression techniques;
6) crappy songs;
7) even more bad use/over-use of compression.

Stuff that would sound bad at any sample rate, perfect waveform reconstruction or not. The subtle differences between 48k and 96k (if any) aren't even close to being the real problems compared to the above with most of what I'm hearing being created today. Worry more about the things above first... (Yeah, I know it's all subjective... GS is a place for these intellectual discussions, etc. Yawn again.)

And I'd love to hear some original music from some of those leading the charge on these arguments. Convince us that any of this really matters by posting some actual music that proves your point(s). Upload some files that clearly display that XX plugin sounds noticeably better at 96k... or show us that 48k and 192k null out in the audible range... whatever.

I'm tired and cranky. Yawn one more time. Goodnight.
Old 4 weeks ago
  #668
Lives for gear
 
esldude's Avatar
Quote:
Originally Posted by sax512 View Post
He gives ample margin, but he states many times that he is against 192 kHz.
That's the main take away from what he says.
Why he thinks we should sample at 60 kHz is not clear to me, but I will attempt an explanation below. It is great to have ample margin to work with, but from an engineering point of view we need to make do with what we have, and the question is 'can you extract ALL the info you need from a 44.1 sampled signal?'. The answer is yes.

Also, he mentioned right in this very thread that the reason he considered wanting to reproduce signal above 20 kHz is that some study says it can be heard. There are studies of all sorts of things, and I believe the results of that one haven't been replicated.
One of the links shared above (https://people.xiph.org/~xiphmont/demo/neil-young.html) explains how difficult it is to avoid false positives when dealing with high-frequency content in ABX tests. That, I think, is the reason why he would want frequencies up to 30 kHz.
Dan Lavry is a master at what he does, but sometimes, especially when you are a public figure, you confine yourself to your field of expertise. I'm sure he is aware of the fact that the human ear doesn't even have the sensors to detect frequencies above 20 kHz, but put yourself in his shoes. Would you want to have to combat the hordes of people gifted with superhuman hearing coming at you from all directions, or make them happy to get them out of the way and focus on the real issue, which is that higher sample rates cause distortion?
The reason Lavry thinks 60 kHz would do is related to ideas of James Johnston, a well-known researcher formerly with Bell Labs. There is a small number (1 or 2%) of young adults who can hear some high-level tones up to 23-25 kHz. If one wishes to have a system capable of total fidelity for any adult human, you need 25 kHz response. In addition, back in the days when some of the filtering was analog, he wanted a wider transition band between the highest frequency and the cut-off, so the filtering could be done in a way that didn't color the sound. So it might be overkill for 99% of people, but to be totally blameless Mr. Johnston suggested 64 kHz sample rates: flat response to 25 kHz, with a transition band 7 kHz wide.

Since 64 kHz never became a standard rate, if you use 88.2 or 96 kHz rates, going ahead with rolling off at 25 or 30 kHz is fine and dandy.

The chance you can hear a difference with music even at 48 kHz vs. more is very small, and it would be a very, very minor perceived difference. That is assuming good, working digital gear. But of course you have shortcuts like half-band filtering, and various 'better'-sounding filters that sound different by slightly altering the upper frequency response or aliasing a bit. These might better be termed broken or bent digital systems.
Old 4 weeks ago
  #669
Gear Addict
 
Mantik's Avatar
Whatever works for you.

Longtime Ableton user. I switched from 44.1 kHz in Ableton to 96 kHz in Reaper, and yes, I think the production/mix has a lot cleaner timing, phase, dynamics and "breath" now. I only use Ableton for creating VSTi lines now, because MIDI is so easy there.


I know, I know the science. But I am not interested. I go with what makes me feel the best, what keeps me in the mood. I am a human being with subjectivity and bias. It's true when I feel it.
Old 4 weeks ago
  #670
Lives for gear
 
esldude's Avatar
Quote:
Originally Posted by DistortingJack View Post
It wasn't. 16 bit 44.1 kHz was chosen so that Sony could fit Beethoven's 9th Symphony on a single CD. I am not joking.
Well, there is some truth to that in regard to the physical size of the CD.

Among several reasons, 44.1 kHz was chosen because in those days they were using frames of pro video machines to hold digital data when recording. With the NTSC rates it worked out that you could fit 44.1 kHz worth of samples onto the video tape nicely, with little waste. So in fact both the 44.1 and 48 kHz rates were dictated by video standards.

There were 245 lines per field, each line holding 3 samples, and at 60 fields per second that is 44,100. Video actually ran at 29.97 Hz, so some early machines actually ran at more like 44,056 Hz even though the standard was 44,100 Hz. From Wikipedia:

U-matic was also used for the storage of digital audio data. Most digital audio recordings from the 1980s were recorded on U-matic tape via a Sony PCM-1600, -1610, or -1630 PCM adaptor. These devices accepted stereo analogue audio, digitised it, and generated "pseudo video" from the bits, storing 48 bits—three 16-bit samples—as bright and dark regions along each scan line. (On a monitor the "video" looked like vibrating checkerboard patterns.) This could be recorded on a U-matic recorder. This was the first system used for mastering audio compact discs in the early 1980s. The famous compact disc 44.1 kHz sampling rate was based on a best-fit calculation for NTSC and PAL's video's horizontal line period and rate and U-matic's luminance bandwidth. On playback the PCM adapter converted the light and dark regions back to bits. Glass masters for audio CDs were made via laser from the PCM-1600's digital output to a photoresist- or dye-polymer-coated disc. This method was common until the mid-1990s.
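The arithmetic behind those numbers, as a quick check (a sketch using the figures quoted above; the NTSC-locked rate works out to the 44.056 kHz that some early machines ran at):

Code:
lines_per_field = 245      # active lines per field used by the PCM adaptor
samples_per_line = 3       # three 16-bit samples stored per video line
fields_per_second = 60     # nominal NTSC field rate

print(lines_per_field * samples_per_line * fields_per_second)    # 44100
print(round(44_100 * 29.97 / 30, 1))                             # ~44055.9 Hz on NTSC-locked machines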
Old 4 weeks ago
  #671
Gear Maniac
 

Quote:
Originally Posted by esldude View Post
The reason Lavry thinks 60 kHz would do is related to ideas of James Johnston, a well-known researcher formerly with Bell Labs. There is a small number (1 or 2%) of young adults who can hear some high-level tones up to 23-25 kHz. If one wishes to have a system capable of total fidelity for any adult human, you need 25 kHz response. In addition, back in the days when some of the filtering was analog, he wanted a wider transition band between the highest frequency and the cut-off, so the filtering could be done in a way that didn't color the sound. So it might be overkill for 99% of people, but to be totally blameless Mr. Johnston suggested 64 kHz sample rates: flat response to 25 kHz, with a transition band 7 kHz wide.

Since 64 kHz never became a standard rate, if you use 88.2 or 96 kHz rates, going ahead with rolling off at 25 or 30 kHz is fine and dandy.

The chance you can hear a difference with music even at 48 kHz vs. more is very small, and it would be a very, very minor perceived difference. That is assuming good, working digital gear. But of course you have shortcuts like half-band filtering, and various 'better'-sounding filters that sound different by slightly altering the upper frequency response or aliasing a bit. These might better be termed broken or bent digital systems.
See, I knew he said something about higher than 20 kHz hearing.
Just to be clear, the results of those studies were never replicated.
But the main issue for Lavry is flatness up to 20 kHz.
Just to be clear, this is flatness with digital filters, as you couldn't even approach it with analog filters, even at way higher desired final sample rates.

I see no problem with filter designs that allow a small deviation from flatness, as long as that deviation is maintained up to 20 kHz. This deviation is a fraction of what happens to the signal after it goes through a speaker, let alone a speaker in a room.
I ALSO think there's no problem with keeping a tighter tolerance in the rest of the audio band and going with -3 dB @ 20 kHz, for anyone who has ever claimed to care about differences in sample rates.
Actually, I would probably go with the latter if I could choose a design.
But to say that the former is an example of broken filter design is, to me, to disregard the audio chain as a whole. I'm not sure whether that's what you were referring to, or rather to other types of designs that are actually broken.
And you always alias a bit. It would have to get quite a bit worse than the normal rejection values of today's filters for humans to hear it, though. Remember, that rejection is applied to frequency content that is already smaller in power than the content in the audio band, so you find it there, in its aliased form, at even lower power than the filter's rejection figure alone would suggest.
Old 4 weeks ago
  #672
Gear Maniac
 

Quote:
Originally Posted by Space1999 View Post

To the purely scientific crowd I have devised a new formula for this occasion, I call it the Rigidity Law:

The rigidity of your beliefs is directly proportional to the tenacity to which you cling to them and indirectly disproportional to your willingness to accept new ideas.


I will be back to follow the thread but this is my stop and I have to get off.

Pat
Wow! Did you come up with that all on your own? You must be pretty proud of yourself! Go ahead and pat yourself on the back. You deserve it.

I don't think any "purely scientific" crowd comes anywhere close to forums like Gearslutz. They are smarter than that.
Anybody roaming these antechamber-of-hell threads has a genuine interest in music.
Some of us simply ALSO happen to have a solid scientific background.

When you have invested as much time and resources as I have into researching how to make a better binaural microphone for purist recordings, or a better speaker, you will be able to claim the higher ground on the 'love for music' vs. 'cold science' scale.
Until then, all I can say is good luck not dropping out of college. Humanity wouldn't want to miss the fruits of your future research findings, given the rare amalgam of interest in science and willingness to disregard it that your brain has been gifted with.

Since I am ALSO somewhat of a scientific guy, I kinda know better myself than to drag conversations on.
I gave myself some time to jump in and share some other ways of looking at the science (just different ways of saying it; NOTHING NEW was said by me on this subject. It's all out there, for whoever wants to look).
Giving yourself a time limit is a good way to maintain mental sanity, when you're facing people of a certain mindset. I had fun, but I'm starting to get snappy, and that's a good indication of when the fun is about to be over.
Apparently I have an edge, and to recreate it accurately is one of the rare cases where you do need higher sample rates.
So... I'm no Dan Lavry, but I'm just as out (for now, for good?).

Peace
Old 4 weeks ago
  #673
Lives for gear
 
monomer's Avatar
 

Quote:
Originally Posted by FatB View Post
I'm one of those people. I think it doesn't take much in terms of headphones or stereo system to hear how bad MP3 is. MP4/AAC is a lot better at the same bitrate.
But you are probably right that most people listen to music with their phones and basic earbuds, or their car system, or their TV speaker, so yeah, I agree that most people are not picky sound-wise.
But have them listen to a good stereo system (nothing that breaks the bank) and everybody hears the difference.

And about the vinyl sound: again, not the majority, and a lot of it is nostalgia, but I believe most of them just prefer the sound. It's like speakers; at a certain point it's a matter of taste, of finding your sound. I listened to $30K speakers at a show that I hated. There are no "better for everyone" speakers/headphones/media formats. At least that's my experience, as a consumer.
Edit: And I agree with the loudness war argument.
Yeah, you're probably right. At some point it simply becomes (mostly) preference.
Old 4 weeks ago
  #674
Lives for gear
 
Bstapper's Avatar
 

Quote:
Originally Posted by IanBSC View Post
Perhaps you could clue the rest of us in on a particular recording or playback chain that will give us transparent sound from 16/44.1khz?
You may be missing the point. Higher sampling rates may sound more transparent to you *on your system* but to extrapolate from there to "higher sampling rates sound more transparent" is simply incorrect.

So don't let that impact the way you work, but be aware that it is perfectly within the realm of science and reality to manufacture a quality converter with an expensive analog filter that sounds just as transparent at lower sampling rates. In fact, that is exactly what you are paying for when you purchase "high end" conversion.

Too many people experience a result on their system and make the dubious assumption that the experienced result applies to all.



Cheers,
Brock
Old 4 weeks ago
  #675
Lives for gear
 
norfolk martin's Avatar
 

Quote:
Originally Posted by robshrock View Post
I've been following this thread for a while... more interested at certain times, yawning at other times. But I can't walk away.

I do know that mostly what I hear negatively in a lot of music from the digital world is:

1) over-hyped EQ and bad EQ choices;
2) immature use of compression, and having no idea how to set attack and release times;
3) overuse of distortion, saturation and hype effects;
4) bad balances;
5) more bad compression techniques;
6) crappy songs;
7) even more bad use/over-use of compression.

Stuff that would sound bad at any sample rate, perfect waveform reconstruction or not.
.
Yup. And (to me) the horror is that much of this is done under the guise of making it sound "more analog."
Old 4 weeks ago
  #676
Gear Guru
Look, vinyl playback inherently has a different characteristic based on physics. Digital does not, so it's kind of silly to argue which is better other than subjectively. Lossy formats are well named, since they lose info through compression. That can kinda suck or really suck. For the "it doesn't matter because earbuds" crowd: all the classics were essentially made for AM mono car radios, and they sound fantastic in feature films or any playback setting. MP3s do suck and MP4s are a bit better. CD quality is essentially uncompressed, so much better, and yes, they came up with the standards for commercial reasons, but they weren't just pulled out of a hat. You had the best engineering minds on the planet working on developing the standard.....

Good luck hearing a real difference between the higher sample rates. As Bob O said, the real benefit is filtering. 192 makes zero sense unless you want to run audio at half speed, which is why location recorders offer it......

Some really great info in this thread but that's my takeaway......
Old 4 weeks ago
  #677
Gear Guru
 
UnderTow's Avatar
Quote:
Originally Posted by DistortingJack View Post
It wasn't. 16 bit 44.1 kHz was chosen so that Sony could fit Beethoven's 9th Symphony on a single CD. I am not joking.
That's not actually true. Just some marketing spin:

http://www.turing-machines.com/pdf/beethoven.htm

"Everyday practice is less romantic than the pen of a public relations guru, as at that time, Philips’ subsidiary Polygram –one of the world's largest distributors of music– had set up a CD disc plant in Hanover, Germany that could produce large quantities CDs with, of course, a diameter of 115mm. Sony did not have such a facility yet. So if Sony had agreed on the 115mm disc, Philips would have had a significant competitive edge in the music market. Ohga was aware of that, did not like it, and something had to be done. It was not about Mrs. Ohga’s great passion for music, but the money and competition in the market of the two partners. "

Alistair
Old 4 weeks ago
  #678
Here for the gear
 

Ugh, nobody is interested in Yamaha's article that I previously posted? I think it is a good read and contains a bunch of information...

So I'll post another article instead.


Benefits of Delta-Sigma Analog-to-Digital Conversion - National Instruments
http://www.ni.com/en-us/innovations/...onversion.html

Quote:
Digital Decimation Filtering

The bit stream from the delta-sigma modulator is output to a digital decimation filter that averages and downsamples, thus producing an n-bit sample at the desired sample rate, Fs. This process of averaging has the effect of lowpass filtering the signal in the frequency domain, which attenuates the quantization noise and removes aliases from the band of interest. This decimation filter is usually built for an extremely flat frequency response in the passband and no phase error, a sharp roll-off near the cutoff frequency (about 0.49 times the sample rate Fs) and excellent rejection in the stop band, making it very effective at antialiasing. A digital decimation filter is typically implemented as a Finite Impulse Response (FIR) filter, such as a comb filter, which is a cost-effective way of implementing decimation.
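For anyone who wants to see that in miniature, here is a toy first-order delta-sigma modulator plus decimation in Python (purely illustrative; real converters use higher-order modulators and multi-stage decimation filters like the ones the article describes):

Code:
import numpy as np
from scipy.signal import resample_poly

osr, fs_out = 64, 44_100
fs_mod = fs_out * osr                          # ~2.82 MHz modulator rate
n = fs_mod // 5                                # 200 ms of signal
t = np.arange(n) / fs_mod
x = 0.5 * np.sin(2 * np.pi * 1_000 * t)        # half-scale 1 kHz test tone

# first-order delta-sigma modulator: integrate (input - last output), quantize to +/-1.
# The quantization error mostly ends up above the audio band (noise shaping).
bitstream = np.empty(n)
integ, prev = 0.0, 0.0
for i in range(n):
    integ += x[i] - prev
    prev = 1.0 if integ >= 0 else -1.0
    bitstream[i] = prev

# digital decimation filter: lowpass + downsample back to 44.1 kHz, which averages
# away the shaped noise and keeps the audio-band content
y = resample_poly(bitstream, up=1, down=osr)

f = np.fft.rfftfreq(len(y), 1 / fs_out)
print("strongest output component:", f[np.argmax(np.abs(np.fft.rfft(y)))], "Hz")   # ~1000.0

A one-bit stream at a couple of MHz comes out the other end as a clean 1 kHz tone at 44.1 kHz, which is the averaging and filtering the quoted paragraph is talking about.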
About how we actually hear sound...
Temporal resolution of hearing probed by bandwidth restriction
https://www.tnt-audio.com/casse/temporal_resolution.pdf


Pitch | Cochlea
http://www.cochlea.eu/en/sound/psychoacoustics/pitch

Some audiophile attempt
MQA Time-domain Accuracy & Digital Audio Quality
https://www.soundonsound.com/techniq...io-quality?amp


Hi-Res Audio Tutorial
https://www.magicbus.biz/what-is-hi-res-audio-.html



Off the topic... These whitepapers (especially "time") will be a good stretch for "scientific" people. It's pure gold.

The Story of Iconoclast Cable
https://www.iconoclastcable.com/story/index.htm


So distracted, but same keyword...5-10 microseconds.
Old 4 weeks ago
  #679
Lives for gear
 
DistortingJack's Avatar
 

Quote:
Originally Posted by esldude View Post
Well, there is some truth to that in regard to the physical size of the CD.

Among several reasons, 44.1 kHz was chosen because in those days they were using frames of pro video machines to hold digital data when recording. With the NTSC rates it worked out that you could fit 44.1 kHz worth of samples onto the video tape nicely, with little waste. So in fact both the 44.1 and 48 kHz rates were dictated by video standards.

There were 245 lines per field, each line holding 3 samples, and at 60 fields per second that is 44,100. Video actually ran at 29.97 Hz, so some early machines actually ran at more like 44,056 Hz even though the standard was 44,100 Hz. From Wikipedia:

U-matic was also used for the storage of digital audio data. Most digital audio recordings from the 1980s were recorded on U-matic tape via a Sony PCM-1600, -1610, or -1630 PCM adaptor. These devices accepted stereo analogue audio, digitised it, and generated "pseudo video" from the bits, storing 48 bits—three 16-bit samples—as bright and dark regions along each scan line. (On a monitor the "video" looked like vibrating checkerboard patterns.) This could be recorded on a U-matic recorder. This was the first system used for mastering audio compact discs in the early 1980s. The famous compact disc 44.1 kHz sampling rate was based on a best-fit calculation for NTSC and PAL's video's horizontal line period and rate and U-matic's luminance bandwidth. On playback the PCM adapter converted the light and dark regions back to bits. Glass masters for audio CDs were made via laser from the PCM-1600's digital output to a photoresist- or dye-polymer-coated disc. This method was common until the mid-1990s.
Quote:
Originally Posted by UnderTow View Post
That's not actually true. Just some marketing spin:

http://www.turing-machines.com/pdf/beethoven.htm

"Everyday practice is less romantic than the pen of a public relations guru, as at that time, Philips’ subsidiary Polygram –one of the world's largest distributors of music– had set up a CD disc plant in Hanover, Germany that could produce large quantities CDs with, of course, a diameter of 115mm. Sony did not have such a facility yet. So if Sony had agreed on the 115mm disc, Philips would have had a significant competitive edge in the music market. Ohga was aware of that, did not like it, and something had to be done. It was not about Mrs. Ohga’s great passion for music, but the money and competition in the market of the two partners. "

Alistair
I stand corrected. Good times.
Old 4 weeks ago
  #680
Gear Guru
 
UnderTow's Avatar
Quote:
Originally Posted by bleepwalk View Post
, nobody is interested in Yamaha's article that I previously posted?
If you state the point you are making or supporting with the link then maybe people might respond but right now it isn't really clear what you are aiming at with the articles.

Quote:
So distracted, but same keyword...5-10 microseconds.
And the timing resolution of CD is measured in the picosecond range. In other words, orders of magnitude beyond what we can perceive.

Alistair
Old 4 weeks ago
  #681
Lives for gear
 
bogosort's Avatar
Quote:
Originally Posted by johnnyc View Post
The sincs are weighted and shifted in time, yes?
The reconstruction of f(t) is given in equation (7), which is an infinite sum over samples in n, not time in t. Note that, though t appears in the sinc, it is a parameter and not a variable. In other words, to find f(0) we fix t = 0 and sum over all n; to find f(0.3), fix t = 0.3 and sum over all n; etc. There is no "infinite time" in the reconstruction.
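A small numerical check of that reading of equation (7) (the sample rate, tones and block length below are arbitrary; with a finite block there is a small truncation error at the edges, which is a separate practical issue, not a flaw in the theorem):

Code:
import numpy as np

fs = 48_000
T = 1 / fs
n = np.arange(4096)                            # a finite block of samples

def f(t):
    # band-limited test signal (all content well below fs/2)
    return np.sin(2 * np.pi * 5_000 * t) + 0.3 * np.sin(2 * np.pi * 7_321 * t)

samples = f(n * T)

def reconstruct(t):
    # fix t, then sum over n: f(t) ~= sum of x[n] * sinc((t - n*T) / T)
    return np.sum(samples * np.sinc((t - n * T) / T))

# evaluate between sample instants, near the middle of the block
for t in (0.0400003, 0.0427777, 0.0451234):
    print(round(reconstruct(t), 4), round(f(t), 4))

The reconstructed and true values agree to three or four decimal places here; the small residual comes from truncating the sum to a finite block, and it shrinks as the block grows.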

Quote:
It works, but not 100% accurate. What is the resultant error?
It is 100% accurate as long as the sampling rate exceeds twice the signal's bandwidth. That's exactly what the proof tells us. If you disagree, go ahead and show the flaw (and be prepared to become famous!).

Quote:
You seem to be missing the premise of this thread, which is sample rate. What is the fourier transform of a finite aperiodic signal? How does the sample rate influence the accuracy of the system given a finite aperiodic signal? Will the Nyquist rate provide sufficient accuracy for a short aperiodic signal?
I was responding to your contention that the sampling theorem requires f(t) to be an infinite, periodic signal.

Quote:
Why does sin(x) require infinite points but sin(x) + sin(x/2) does not?
They both represent periodic functions with infinitesimal bandwidths. In the case of sin(x), its Fourier transform has two points; in the case of sin(x) + sin(x/2), its Fourier transform has four points. In other words, they are the same class of function, namely, those with infinite support in time and vanishing bandwidth. These types of functions cannot be perfectly sampled, but that doesn't matter since these types of functions do not exist in the real world.

Quote:
How can you recover a signal from a single sample? A square wave has infinite bandwidth, will one sample be sufficient?
Let f(t) be a dirac delta, then -- for a system with infinite bandwidth -- one sample suffices to characterize the signal. I'm not saying such signals exist, it was brought up only to illustrate the relationship between time and bandwidth. A square wave is an infinite-time signal, of the same class as sin(x) + sin(x/2), and cannot be perfectly sampled. Again, no biggie, since it doesn't physically exist.

Quote:
And how much is enough samples? What if it's a short event? Consider a single period of a signal. If we sample at 2.2x then there will be at most 3 non-zero samples, is that sufficient? What is the resultant error?
The sampling theorem is clear how many samples are sufficient: for a signal of bandwidth W, it is perfectly characterized with anything greater than 2W samples per second.
Old 4 weeks ago
  #682
Here for the gear
 

Quote:
Originally Posted by UnderTow View Post
If you state the point you are making or supporting with the link then maybe people might respond but right now it isn't really clear what you are aiming at with the articles.
Sorry, I'm too lazy.


Quote:
Originally Posted by UnderTow View Post
And the timing resolution of CD is measured in the picosecond range. In other words, orders of magnitude beyond what we can perceive.

Alistair
I couldn't figure out what you mean. I guess it's something to do with the combined wavelength of the laser, the distance between spots in the groove, and the RPM of the disc... the tracking interval maybe? So I think it's neither the sampling rate nor the data rate.
Old 4 weeks ago
  #683
Lives for gear
 

Quote:
Originally Posted by bleepwalk View Post
Ugh, nobody is interested in Yamaha's article that I previously posted? I think it is a good read and contains a bunch of information...

So I'll post another article instead.
I think it's generally considered more "productive" to post an idea, statement or question of your own and then link to stuff to support it, rather than just posting some links and assuming people will know why they should read them and then actually read them.

Multiply that by 20 if you've been a user for 2.5 years yet these are your very first posts since you joined, posted in a thread that's 13 years old and was old news even back then...
Old 4 weeks ago
  #684
Lives for gear
 
esldude's Avatar
Quote:
Originally Posted by sax512 View Post
See, I knew he said something about higher than 20 kHz hearing.
Just to be clear, the results of those studies were never replicated.
But the main issue for Lavry is flatness up to 20 kHz.
Just to be clear, this is flatness with digital filters, as you couldn't even approach it with analog filters, even at way higher desired final sample rates.

I see no problem with filter designs that allow a small deviation from flatness, as long as that deviation is maintained up to 20 kHz. This deviation is a fraction of what happens to the signal after it goes through a speaker, let alone a speaker in a room.
I ALSO think there's no problem with keeping a tighter tolerance in the rest of the audio band and going with -3 dB @ 20 kHz, for anyone who has ever claimed to care about differences in sample rates.
Actually, I would probably go with the latter if I could choose a design.
But to say that the former is an example of broken filter design is, to me, to disregard the audio chain as a whole. I'm not sure whether that's what you were referring to, or rather to other types of designs that are actually broken.
And you always alias a bit. It would have to get quite a bit worse than the normal rejection values of today's filters for humans to hear it, though. Remember, that rejection is applied to frequency content that is already smaller in power than the content in the audio band, so you find it there, in its aliased form, at even lower power than the filter's rejection figure alone would suggest.
Don't know which results you are referring to as not being replicated. The ones I have in mind were. They were very simple: tell us if you hear this tone. And people did. Now, those tones were very high in level, over 100 dB SPL. And the sensation wasn't reported as clearly a tone, just some sort of aural perception that something was there. Again, in the context of masking and the levels in music, it is probably of no real concern at any time.

Also, for younger people -3 dB at 20 kHz will be audible, even down to around 16 kHz. It's easy to have good filtering and be flatter than that.

Half-band filters, instead of being say 96 dB down at the highest frequency (22,050 Hz for 44.1 kHz), are down 48 dB. That allows some higher-level aliasing in the ADC and imaging in the DAC. Both together will get you down 96 dB by 20 kHz and 24.1 kHz, but they are something of a shortcut. Sometimes apodizing filters will soften the level of the 10-20 kHz band, and let a bit more low-level imaging and aliasing through. Probably too low to matter, but not strictly following good design for correct reconstruction. Broken designs would be those that maybe don't use a filter on the output at all. Yes, such are out there in small numbers.
Old 4 weeks ago
  #685
Here for the gear
 

Quote:
Originally Posted by mattiasnyc View Post
I think it's generally considered more "productive" to post an idea, statement or question of your own and then link to stuff to support it, rather than just posting some links and assuming people will know why they should read them and then actually read them.

Multiply that by 20 if you've been a user for 2.5 years yet these are your very first posts since you joined, posted in a thread that's 13 years old and was old news even back then...

I don't assume, just asking, it's okay. No offense! TBH, discussing in English is a bit much for me.
I'm also not trying to convince but to be convinced. If there's something good for me to learn, that's good. Sorry for the mess. I'll read the entire thread soon!



Last edited by bleepwalk; 4 weeks ago at 07:18 PM.. Reason: add sentence
Old 4 weeks ago
  #686
Lives for gear
 
esldude's Avatar
Quote:
Originally Posted by bleepwalk View Post
Ugh, nobody is interested in Yamaha's article that I previously posted? I think it is a good read and contains a bunch of information...

So I'll post another article instead.


Benefits of Delta-Sigma Analog-to-Digital Conversion - National Instruments
http://www.ni.com/en-us/innovations/...onversion.html



About how we actually hear sound...
Temporal resolution of hearing probed by bandwidth restriction
https://www.tnt-audio.com/casse/temporal_resolution.pdf


Pitch | Cochlea
http://www.cochlea.eu/en/sound/psychoacoustics/pitch

Some audiophile attempt
MQA Time-domain Accuracy & Digital Audio Quality
https://www.soundonsound.com/techniq...io-quality?amp


Hi-Res Audio Tutorial
https://www.magicbus.biz/what-is-hi-res-audio-.html



Off the topic... These whitepapers (especially "time") will be a good stretch for "scientific" people. It's pure gold.

The Story of Iconoclast Cable
https://www.iconoclastcable.com/story/index.htm


So distracted, but same keyword...5-10 microseconds.
You start with the awful Kunchur paper. It has some very basic mistakes. One is confusing JND with level differences that will allow perception. In essence, his analog filters altered the 7 kHz fundamental enough that he was merely testing for a level difference, and it was heard just as one would expect. He wrongly concludes 5 microseconds is the limit. Prior work going back to the 1950s, if not earlier, shows it to be 10-12 microseconds, and they didn't use his method. The other obvious mistake is thinking the time resolution of digital is limited to the time between samples. It isn't. As already stated, Redbook is good into the picosecond range.
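A quick sketch of why the time between samples is not the timing resolution (values below are illustrative): a tone delayed by 5 microseconds, a fraction of the ~22.7 microsecond sample period at 44.1 kHz, is a perfectly representable and easily measurable change in the samples:

Code:
import numpy as np

fs, f0, delay = 44_100, 1_000, 5e-6        # delay is ~0.22 of a sample period
t = np.arange(fs) / fs                     # 1 s of signal

a = np.sin(2 * np.pi * f0 * t)
b = np.sin(2 * np.pi * f0 * (t - delay))   # same tone, delayed by a fraction of a sample

# cross-spectrum phase at the tone frequency gives the delay straight back
A, B = np.fft.rfft(a), np.fft.rfft(b)
k = f0                                     # with 1 s of signal, bin k sits exactly at k Hz
est = -np.angle(B[k] * np.conj(A[k])) / (2 * np.pi * f0)
print(round(est * 1e6, 3), "microseconds") # ~5.0

Add realistic noise and 16-bit quantization and the measurable shift gets coarser, but for any sane signal-to-noise ratio it stays orders of magnitude below the sample period, which is the point.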

The cable whitepaper is using sciencey-sounding stuff to baffle them with BS. So sad.

Last edited by esldude; 4 weeks ago at 07:46 PM..
Old 4 weeks ago
  #687
Gear Maniac
 

Quote:
Originally Posted by DistortingJack View Post
It wasn't. 16 bit 44.1 kHz was chosen so that Sony could fit Beethoven's 9th Symphony on a single CD. I am not joking.
You might not be joking, but you are wrong -- in case anyone needs an object lesson on how urban legends are perpetuated...

What *is* true about CDs, and sounds a lot like an urban legend, is that the 15 mm diameter of the hole was based on a Dutch dubbeltje coin, chosen by Joop Sinjou, the head of Philips' audio products development.
Old 4 weeks ago
  #688
Gear Addict
 

Quote:
Originally Posted by Cpl. Punishment View Post
You might not be joking, but you are wrong -- in case anyone needs an object lesson on how urban legends are perpetuated...

What *is* true about CDs that sounds a lot like an urban legend, is the 15mm diameter of the hole was based on a Dutch dubbeltje coin by Joop Sinjou, the head of Philips audio products development.
That was the story given, of course, but the real story was more about competition between Sony and Philips, who worked together to get the CD standard out. The article posted previously, written by one of the engineers of the CD and Redbook standard, tells the tale. Great read and not terribly long.

http://www.turing-machines.com/pdf/beethoven.htm
Old 4 weeks ago
  #689
Lives for gear
 

Quote:
Originally Posted by bogosort View Post
The reconstruction of f(t) is given in equation (7), which is an infinite sum over samples in n, not time in t. Note that, though t appears in the sinc, it is a parameter and not a variable. In other words, to find f(0) we fix t = 0 and sum over all n; to find f(0.3), fix t = 0.3 and sum over all n; etc. There is no "infinite time" in the reconstruction.
What is n? What is T? What is nT? What does sinc(t-nT) represent? Hmmm... it looks a lot like a time-shifted sinc. Since there are infinitely many values of nT, it is indeed an infinite sum over time. Note: nT doesn't need to be continuous; maybe that's the confusion.

Quote:
Originally Posted by bogosort View Post
It is 100% accurate as long as the sampling rate exceeds twice the signal's bandwidth. That's exactly what the proof tells us. If you disagree, go ahead and show the flaw (and be prepared to become famous!).
Lol, we were discussing a PRACTICAL system. So how about YOU provide a proof of how you can have 100% accuracy with finite time.

Quote:
Originally Posted by bogosort View Post
I was responding to your contention that the sampling theorem requires f(t) to be an infinite, periodic signal.
You are correct, in that a periodic signal is not a requirement, and that shouldn't have been linked to the theory. But the other questions posed regarding practical applications still remain.

Quote:
Originally Posted by bogosort View Post
Let f(t) be a dirac delta, then -- for a system with infinite bandwidth -- one sample suffices to characterize the signal. I'm not saying such signals exist, it was brought up only to illustrate the relationship between time and bandwidth. A square wave is an infinite-time signal, of the same class as sin(x) + sin(x/2), and cannot be perfectly sampled. Again, no biggie, since it doesn't physically exist.
You seem to be mixing up many different things. Consider what happens in reconstruction if you only have 1 sample: what does the signal look like? How would you reconstruct a pulse with only 1 sample?

Quote:
Originally Posted by bogosort View Post
The sampling theorem is clear how many samples are sufficient: for a signal of bandwidth W, it is perfectly characterized with anything greater than 2W samples per second.
You are reciting the theorem without sufficiently pondering the questions. There are practical limitations that go beyond Signals and Systems 101. Whether you choose to explore and learn about them is up to you.
Old 4 weeks ago
  #690
When CD was introduced, 14-bit digital-to-analogue converters were used. Mr Linn of Linn Systems couldn't hear the difference between a 100% analog signal path using one of his expensive turntables and one with an AD/DA converter in the signal path.