88.2 - 96kHz Vs 44.1-48kHz (a thread to end them all!!)
Old 17th February 2020
  #1111
Gear Addict
 
haysonics's Avatar
 

Quote:
Originally Posted by chrischoir View Post
Oversampling has been around since the 70s
Thanks but that is not what I am referring to.
Old 17th February 2020
  #1112
Lives for gear
 
chrischoir's Avatar
 

Quote:
Originally Posted by haysonics View Post
Thanks but that is not what I am referring to.
of course you weren't
Old 17th February 2020
  #1113
Lives for gear
Quote:
Originally Posted by tomwatson View Post
If you're creating electronic or pop music with fewer live instruments, the majority of Kontakt libraries, drum hits and samples are 44.1, sometimes 48k, so it doesn't make much sense to go higher unless you're using more actual synths or recording real instruments and vocals. Personally, I just do everything at 44.1k. If half the song is at 44.1 due to the source sounds (drum hits, Kontakt library patches), putting the sample rate up and then back down again could be worse than just leaving it at 44.1, due to downsampling and aliasing. I do everything at 32-bit though.
I disagree. There is no penalty worth being afraid of in going up in sample rate. Going down there is, as you typically need dithering.

I routinely use 44.1 recordings in 88.2/96 projects. I haven't seen many issues, if any.

Who making electronic music doesn't use at least a soft synth or external synths? The former often, but not always, sound better oversampled, though most of them sort that out internally.
Old 17th February 2020
  #1114
Lives for gear
Quote:
Originally Posted by ardis View Post
Has anyone actually nulled the same file generated with different sample rates? I'd be curious........ Seems that would put the subject in perspective......
That takes five minutes to do. Why don't you try?
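For anyone who does want to try: a rough Python sketch of one way to run that null test (assuming NumPy/SciPy; the 1 kHz tone is just a stand-in for whatever file you'd actually load).

import numpy as np
from scipy.signal import resample_poly

fs = 44100
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 1000 * t)   # stand-in for "the same file"

up = resample_poly(x, 320, 147)          # 44.1 kHz -> 96 kHz
back = resample_poly(up, 147, 320)       # 96 kHz -> back to 44.1 kHz

n = min(len(x), len(back))
residual = x[:n] - back[:n]
rms = np.sqrt(np.mean(residual ** 2))
print("null residual: %.1f dB" % (20 * np.log10(rms + 1e-12)))

Whatever number comes out tells you how far from a perfect null the up/down round trip is.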
Old 17th February 2020
  #1115
Gear Addict
 
haysonics's Avatar
 

Quote:
Originally Posted by chrischoir View Post
of course you weren't
I am asking when ADCs moved from artificial (interpolated) oversampling to actual/real oversampling.
Old 17th February 2020
  #1116
Quote:
Originally Posted by Mikael B View Post
I disagree. There is no penalty worth being afraid of in going up in sample rate. Going down there is, as you typically need dithering.

I routinely use 44.1 recordings in 88.2/96 projects. I haven't seen many issues, if any.

Who making electronic music doesn't use at least a soft synth or external synths? The former often, but not always, sound better oversampled, though most of them sort that out internally.
Actually, if the source is 44.1 and you put the whole project up to 96 and then drop it back down to 44.1, there will be a very slight decrease in quality due to aliasing. Also, almost all electronic sampled drum hits are at 44.1. So there is a trade-off between the slight decrease in quality of the upsampled-then-downsampled content vs the native content recorded at a higher sample rate. Depending on how much sample-based content is used in the song (including sample-based instruments like Kontakt), which is usually at either 44.1k or 48k, it could actually sound slightly worse going up to 96k and then back down to 44.1k or 48k. Overall, it's not really worth double the hard drive space in my opinion in those situations, as any difference is very minimal.

The other downside: when the DAW is open, if a client wants to reference a track off YouTube, send you an MP3 in an email, or play anything else outside the DAW, you'd need a second interface and monitor controller just to play it, because the interface is locked at the DAW's sample rate. And then you're listening to the converters of the second interface, which may not sound the same. Downloading, ripping, and importing all take time. Not such a big deal, but if the client wants you to look up 10 different songs it can be a workflow killer. Just a few things to consider.

I was running higher sample rates for a while, but I actually think those mixes were worse due to having too much upsampled 44.1 content in them. Also, I was matching to reference tracks at 44.1 but my project was at 96k, which made it sound a little better. Dropping it down to 44.1, it got a little thinner and was further away from the reference track than I thought. Technically better specs don't always mean a better end product.

At the end of the day, it's the end product that matters. I know higher sample rates are better, but to me the difference is so small it doesn't matter that much. Bit depth, however, makes a much bigger difference to me than sample rate. I'd rather go 32-bit at 44.1 than 24-bit at 96k. It's more practical to work with.
Old 17th February 2020
  #1117
Gear Maniac
 

Quote:
Originally Posted by chrischoir View Post
Quote:
Originally Posted by haysonics View Post
Do you remember around which year ADC converters utilising a very high rate digital capture became available?
Oversampling has been around since the 70s
Funny story (to me, hope it is to someone else): I started reading about DSP in the early '80s (Hal Chamberlin's book in 1980, read Rabiner and Gold in 1983), but just a bit of tinkering (I found R&G to be a bit dense at the time, I wished it had some practical tips).

CD players came out a few years later, and someone wrote an article and said the 14-bit converters weren't as good as the 16-bit (which were still hard to make at the time). But I had read that the 14-bit converters were oversampled to get equivalent performance, while being easier to make. I asked the guy about this in some online forum, and he took offense, just saying I didn't know what I was talking about. I said I'd read it from an article by Ken Pohlmann (in Mix? He was a professor at U of Miami, subsequently wrote some digital audio books, still have them on my shelf). His reply was, hilariously, "Ken Pohlmann? I had lunch with him Tuesday!" (But in a way that indicated I still didn't know what I was talking about, and that Pohlmann had probably not said it.)

I deferred, because I wasn't much of an arguer back then (haha, you'd never guess now, huh), but that was a bit dissatisfying to me. At that point I decided to figure out how to upsample and downsample and why it all worked. Before the decade finished I served as a sample rate conversion legal expert in a high profile patent case for a well known company (we won), and went on to make some pretty cool DSP products. Sometimes these online arguments are good for something. Motivation at least.
Old 17th February 2020
  #1118
Lives for gear
 
sax512's Avatar
 

Quote:
Originally Posted by Mikael B View Post
I disagree. There is no penalty worth being afraid of in going up in sample rate. Going down there is, as you typically need dithering.
You seem to be mixing up sample rate and bit depth here.
They are independent things.
Also, going up in sample rate can mean different things.
One could use higher sample rates in the ADC, carrying all the ultrasonic content in the signal chain, or one could start with 44.1 and oversample (manually or automatically in the DAW), carrying only the 0-20kHz signal.
There is a penalty in carrying ultrasonic content in the audio chain. It is called IMD, and it really doesn't become an issue (broken plug-in design aside) until that ultrasonic content hits analog gear, amps and, mostly, speakers.
If we were able to transduce the digital signal into sound waves in a better way, it wouldn't be a problem.
But due to the non-ideal linearity of the things mentioned above, IMD can sometimes indeed become audible, and that's one reason not to use a higher sampling rate from the beginning.
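To make the IMD mechanism concrete, a toy numerical example (numbers invented purely for illustration, not a model of any real speaker): two ultrasonic tones pass through a weak second-order nonlinearity and a 6 kHz difference tone lands squarely in the audible band.

import numpy as np

fs = 96000
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 24000 * t) + 0.5 * np.sin(2 * np.pi * 30000 * t)

y = x + 0.1 * x ** 2                      # weak 2nd-order nonlinearity (stand-in for a driver)

spectrum = np.abs(np.fft.rfft(y)) / len(y)
freqs = np.fft.rfftfreq(len(y), 1 / fs)
bin_6k = np.argmin(np.abs(freqs - 6000))
print("30 kHz - 24 kHz product at 6 kHz: %.1f dB" % (20 * np.log10(spectrum[bin_6k] + 1e-12)))

Neither input tone is audible on its own; the distortion product is.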
Old 17th February 2020
  #1119
Lives for gear
 
monomer's Avatar
 

Quote:
Originally Posted by bogosort View Post
Everything we've been talking about happens on the ADC side; introducing the DAC is just needlessly muddying the waters.


You won't be able to establish that because it's impossible! Think about it: if what you proposed were true, then averaging a sequence would give us more resolution than the sequence itself, but we know for a fact that it gives us less. That's the entire point of averaging; it's a single-value summary of two or more things. We necessarily lose information when we take an average.

As for why we need 4x the sample rate to get an extra bit, my original explanation was in terms of bandwidths and noise power. Let's try an arithmetic approach.

Our goal with oversampling is to get n extra bits of resolution out of an m-bit quantizer. To do this, we sum a block of 4^n samples and divide the result by 2^n. This produces an (n + m)-bit sample.

Why do we get n extra bits when we do this? Note that 4^n = 2^(2n), and so if we sum 4^n worth of m-bit samples, we get a result with 2n+m bits of resolution:

4^n * 2^m = 2^(2n) * 2^m = 2^(2n + m)

Dividing that sum by 2^n,

2^(2n + m) / 2^n = 2^(2n + m) * 2^(-n) = 2^(2n + m - n) = 2^(n + m)

and we have an (n + m)-bit sample from a 4^n sequence of m-bit samples.

Since this may seem abstract, let's use concrete numbers. We have a 16-bit quantizer, so m = 16. We want 20-bit samples, so our resolution increase is n = 4. Thus, we need 4^4 = 256 samples (each at 16 bits) to get a single 20-bit sample. Plugging in the numbers,

2^(2*4 + 16) / 2^4 = 2^24 / 2^4 = 2^20

and we have a 20-bit value, as required.
Quote:
Originally Posted by earlevel View Post
First, I don't want to get in the middle of this. What bogosort says is correct, but I also see where your thought process is here, and I'll give you a hint at why it doesn't work (then...I'll just...shrink...out..of here...)

OK, you're saying, "I have four levels, can't I toggle between two of them at double the rate? That would average to another level halfway between each—another bit, effectively—at the original rate."

That does actually give you, for a rate increase of N times, N times the number of possible values. Doubling the rate doubles the possibilities.

Where it goes wrong is signal/noise (SNR). Any audio you encode with that original number of bits has uncorrelated error (noise) on top of the coherent signal. Just like when you sum two of the same signal in a mixer, you get a gain factor of 2 (call it +6 dB). But if you sum two uncorrelated signals of the same level you get a gain factor of the square root of 2 (+3 dB). So for signal, the factor is N, for noise sqrt(N). So signal/noise is N/sqrt(N), or simply sqrt(N), which means a 4X sample rate gives you only 2X the effective resolution, i.e. one extra bit.

I'm not going to go into the noise/dither on an oversampling ADC, just wanted to point out the basic idea, and why your thought process is on the right track, but won't give what you think.
Thanks guys!
And sorry for responding so late.

However, I think I have found the flaw in my reasoning.
All this time I was wrongly assuming that my 'pair of samples that are averaged' were actually samples from the same moment in time, while at the same time expecting them to have statistically significant randomness, still coming from that same moment in time.

In reality the samples are from different times, and the randomness contained in these samples is not correlated strongly enough that you get an extra bit for every two samples. And this forces one to look at it from the signal's perspective and not from the individual samples' perspective.

At least, that's how I see it now.

Feel free to shoot at this tho!
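For anyone who wants to see the sqrt(N) behaviour numerically, a rough sketch (my own toy numbers, assuming NumPy, not anyone's converter design): quantize a dithered signal at 16 bits, average blocks of 4^4 = 256 samples, and the quantization noise drops by roughly 24 dB, i.e. about 4 extra bits, matching the arithmetic above.

import numpy as np

rng = np.random.default_rng(0)
n = 256 * 44100 // 10                     # 0.1 s at a 256x oversampled rate
t = np.arange(n) / (256 * 44100)
x = 0.4 * np.sin(2 * np.pi * 1000 * t)    # ideal signal

q = 2.0 ** -15                            # 16-bit step for a signal in [-1, 1)
dither = rng.uniform(-q / 2, q / 2, n)    # decorrelates the quantization error
x16 = np.round((x + dither) / q) * q      # 16-bit quantized stream

block = 256                               # 4^4 samples per output sample
x_avg = x16.reshape(-1, block).mean(axis=1)
x_ref = x.reshape(-1, block).mean(axis=1) # what an ideal converter would give

err_raw = np.sqrt(np.mean((x16 - x) ** 2))
err_avg = np.sqrt(np.mean((x_avg - x_ref) ** 2))
print("noise reduction: %.1f dB (about 4 bits)" % (20 * np.log10(err_raw / err_avg)))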
Old 17th February 2020
  #1120
Lives for gear
 
bogosort's Avatar
Quote:
Originally Posted by haysonics View Post
I was unaware that the 44.1 kHz lowpass filter on the input of an ADC had been replaced with a lowpass filter set to a much higher rate.
That's one of the primary benefits of an oversampling ADC: replacing the extraordinarily complicated analog anti-alias filter (which nonetheless aliased) of a Nyquist-rate ADC with a dead-simple first-order RC filter.

Quote:
On the weekend I dug out the June 1998 issue of Keyboard magazine and on page 42 under the title Oversampling Converters they wrote "Oversampling is the process of increasing the effective sampling rate of a ADC converter by inserting interpolated (mathematically generated) samples between the existing 'real' samples. DAC converters do basically the same thing in reverse."
They seem to be describing upsampling, which happens strictly in the digital domain. A system with a Nyquist-rate ADC that upsamples internally is not an oversampling ADC. But as digital technology was quite new to most musicians in 1998, I think Keyboard magazine can be forgiven for their error.
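Some back-of-envelope numbers for that RC filter (figures invented for illustration, not any particular converter's design): with a ~100 kHz corner and a 128x oversampled modulator, the in-band droop is tiny while the first alias region is already ~35 dB down before the digital decimation filter does the real work. A 44.1 kHz Nyquist-rate converter would instead need on the order of 90+ dB of attenuation between 20 kHz and 22.05 kHz from the analog filter alone, which is why those old brick-wall filters were so elaborate.

import math

fs_mod = 128 * 44100                  # assumed modulator rate, about 5.6 MHz
fc = 100e3                            # assumed RC corner frequency
f_alias = fs_mod - 20000              # lowest frequency that folds back near 20 kHz

def rc_attenuation_db(f, fc):
    return 10 * math.log10(1 + (f / fc) ** 2)

print("droop at 20 kHz: %.2f dB" % rc_attenuation_db(20e3, fc))
print("attenuation at %.0f Hz: %.1f dB" % (f_alias, rc_attenuation_db(f_alias, fc)))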
Old 17th February 2020
  #1121
Lives for gear
 
sax512's Avatar
 

Quote:
Originally Posted by bogosort View Post
They seem to be describing upsampling, which happens strictly in the digital domain. A system with a Nyquist-rate ADC that upsamples internally is not an oversampling ADC. But as digital technology was quite new to most musicians in 1998, I think Keyboard magazine can be forgiven for their error.
You might help me out with some confusion in terms I have had for years, probably due to language.
So, the MHz sampling stage in the ADC is upsampling?
And the adding 0s and digital low pass stage in a DAC is oversampling?
Or is it something else?
Some people seem to call oversampling the simple act of sampling at higher than 44.1 kHz, so that doesn't help..
Old 17th February 2020
  #1122
Lives for gear
 
bogosort's Avatar
Quote:
Originally Posted by sax512 View Post
You might help me out with some confusion in terms I have had for years, probably due to language.
So, the MHz sampling stage in the ADC is upsampling?
And the adding 0s and digital low pass stage in a DAC is oversampling?
Or is it something else?
Some people seem to call oversampling the simple act of sampling at higher than 44.1 kHz, so that doesn't help..
You've got it backwards. Using MHz sampling (for a kHz signal) is oversampling -- you're literally sampling faster than you need to, hence oversampling.

Whatever the original sample rate, whether it was Nyquist-rate or oversampled, you can change the effective sample rate of a digital stream by upsampling or downsampling, i.e., inserting or removing samples in the stream. This happens after the ADC, so it has nothing to do with the sampling stage. For example, a compression plug-in might upsample the stream to handle the increased bandwidth of nonlinear processing, and then filter and downsample back to the target rate before returning the samples to the DAW.
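A minimal sketch of that round trip (assuming NumPy/SciPy; resample_poly's built-in filters stand in for whatever a real plug-in would actually use):

import numpy as np
from scipy.signal import resample_poly

def saturate_oversampled(x, factor=4):
    up = resample_poly(x, factor, 1)         # upsample: headroom for new harmonics
    shaped = np.tanh(2.0 * up)               # nonlinear stage (a stand-in "tube" curve)
    return resample_poly(shaped, 1, factor)  # lowpass + decimate back to the host rate

fs = 44100
t = np.arange(fs) / fs
x = 0.8 * np.sin(2 * np.pi * 5000 * t)       # harmonics of 5 kHz would alias at 1x
y = saturate_oversampled(x)
print(len(x), len(y))                        # same length and rate in and out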
Old 17th February 2020
  #1123
Quote:
Originally Posted by sax512 View Post
You might help me out with some confusion in terms I have had for years, probably due to language.

Quote:
Originally Posted by sax512 View Post
So, the MHz sampling stage in the ADC is upsampling?
Yes, the AD samples more than theoretically required, in order to reduce the impact of the necessary analogue filter (and the issues it produces: phase shift, weak stop-band, non-flat passband, noise, production tolerances).

By sampling audio in the MHz range, even the simplest analogue filter (first-order lowpass) with a really high cutoff frequency will sufficiently anti-alias the audible region, keeping the phase more or less linear over the audible bandwidth, and the curve more or less flat over this region.

Yet, at this point, we still have an incredibly high sample rate, containing tons of aliases and noise, starting and growing above ~20 kHz.

This is where a clean digital SRC does the heavy (and really sensitive) filtering work to get the signal back to economical rates (in the sense of the sampling theorem).

The same happens in the DA stage, where a digital SRC first resamples the audio heavily, to allow the analogue filter to operate in a maximally "comfortable" manner.

Quote:
Originally Posted by sax512 View Post
And the adding 0s and digital low pass stage in a DAC is oversampling?
No, this is resampling. Interpolation. Increasing the sample rate. This only happens in the DA case. No AD converter gains a benefit from interpolating.

Quote:
Originally Posted by sax512 View Post
Some people seem to call oversampling the simple act of sampling at higher than 44.1 kHz, so that doesn't help..
Afaik, the standard definition of oversampling is purely about the AD/DA case.

Anti-aliasing a nonlinear process is arguably not oversampling, it's "sampling just right". Nonlinear processors that don't anti-alias what they produce are just broken promises: "sampling wrong".
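A rough model of that back-end SRC step (illustrative only; a real converter uses dedicated multi-stage decimation filters, not SciPy): the high-rate stream is lowpassed and decimated in stages down to 48 kHz, and anything parked above the audio band gets filtered out on the way.

import numpy as np
from scipy.signal import decimate

fs_high = 64 * 48000                          # pretend oversampled capture rate, ~3.07 MHz
t = np.arange(fs_high // 10) / fs_high        # 0.1 s of high-rate "capture"
x = np.sin(2 * np.pi * 1000 * t) + 0.3 * np.sin(2 * np.pi * 90000 * t)  # audio plus junk above 20 kHz

y = decimate(x, 8, ftype='fir')               # 3.072 MHz -> 384 kHz
y = decimate(y, 8, ftype='fir')               # 384 kHz -> 48 kHz; the 90 kHz junk is gone
print("output samples at 48 kHz:", len(y))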
Old 17th February 2020
  #1124
Lives for gear
 
sax512's Avatar
 

Quote:
Originally Posted by bogosort View Post
You've got it backwards. Using MHz sampling (for a kHz signal) is oversampling -- you're literally sampling faster than you need to, hence oversampling.

Whatever the original sample rate, whether it was Nyquist-rate or oversampled, you can change the effective sample rate of a digital stream by upsampling or downsampling, i.e., inserting or removing samples in the stream. This happens after the ADC, so it has nothing to do with the sampling stage. For example, a compression plug-in might upsample the stream to handle the increased bandwidth of nonlinear processing, and then filter and downsample back to the target rate before returning the samples to the DAW.
Got it. In my notes from university we called oversampling simply sampling at MHz, upsampling at Nx in the DAC oversampling, and upsampling/downsampling at non related sample rates simply changing sample rate.
Thanks.
Old 17th February 2020
  #1125
Lives for gear
 
sax512's Avatar
 

Quote:
Originally Posted by FabienTDR View Post
Yes, the AD samples more than theoretically required, in order to reduce the impact of the necessary analogue filter (and the issues it produces: phase shift, weak stop-band, non-flat passband, noise, production tolerances).

By sampling audio in the MHz range, even the simplest analogue filter (first-order lowpass) with a really high cutoff frequency will sufficiently anti-alias the audible region, keeping the phase more or less linear over the audible bandwidth, and the curve more or less flat over this region.

Yet, at this point, we still have an incredibly high sample rate, containing tons of aliases and noise, starting and growing above ~20 kHz.

This is where a clean digital SRC does the heavy (and really sensitive) filtering work to get the signal back to economical rates (in the sense of the sampling theorem).

The same happens in the DA stage, where a digital SRC first resamples the audio heavily, to allow the analogue filter to operate in a maximally "comfortable" manner.



No, this is resampling. Interpolation. Increasing the sample rate. This only happens in the DA case. No AD converter gains a benefit from interpolating.



Afaik, the standard definition of oversampling is purely about the AD/DA case.

Anti-aliasing a nonlinear process is arguably not oversampling, it's "sampling just right". Nonlinear processors that don't anti-alias what they produce are just broken promises: "sampling wrong".
Ok. Now I'm confused again..
I mean, I know how this stuff works but my confusion is just in terminology.
Maybe you and bogosort can sort it out and let me know.. You're both native English speaking guys, right?
Old 18th February 2020
  #1126
In the case of upsampling, you're stuffing the data sequence with extra zeroes (which creates spectral images) and then running an interpolation filter (which removes them). You end up with a higher sample rate, but no more information than you had before.

In the case of oversampling, those extra samples carry actual data, so the lowpass filtering (to allow safe decimation) yields additional resolution.

David L. Rick
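A small sketch of the zero-stuffing case (filter length and cutoff picked arbitrarily, just for illustration): the stuffed zeros create images of the 1 kHz tone around multiples of the old rate, and the interpolation lowpass removes them.

import numpy as np
from scipy.signal import firwin, lfilter

fs, L = 48000, 4                          # 4x upsampling
t = np.arange(4800) / fs
x = np.sin(2 * np.pi * 1000 * t)

stuffed = np.zeros(len(x) * L)
stuffed[::L] = x                          # zero-stuffing: images at 47, 49 and 95 kHz

h = firwin(255, cutoff=fs / 2, fs=fs * L) # interpolation lowpass at the old Nyquist
y = L * lfilter(h, 1.0, stuffed)          # gain of L restores the original amplitude
print("new rate:", fs * L, "Hz, samples:", len(y))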
Old 18th February 2020
  #1127
Lives for gear
 
sax512's Avatar
 

Quote:
Originally Posted by David Rick View Post
In the case of upsampling, you're stuffing the data sequence with extra zeroes (which creates spectral images) and then running an interpolation filter (which removes them). You end up with a higher sample rate, but no more information than you had before.

In the case of oversampling, those extra samples carry actual data, so the lowpass filtering (to allow safe decimation) yields additional resolution.

David L. Rick
Thanks, David. So you agree with the terminology that bogosort uses, correct?
Old 18th February 2020
  #1128
Gear Maniac
 

Quote:
Originally Posted by sax512 View Post
Ok. Now I'm confused again..
I mean, I know how this stuff works but my confusion is just in terminology.
Maybe you and bogosort can sort it out and let me know.. You're both native English speaking guys, right?
I think everyone's giving good info here, but as you might expect there are some gray areas. And we all pick a word at one time or another that's not the best choice.

Oversampling: Think, "overdoing it". A rate that's more than you need for a given bandwidth requirement. A 720 HP Hellcat to drive to pick up groceries. In contrast, a signal may be undersampled when the sample rate is too low (it could be that it's fine for most of the signal, but at times you get aliasing). And critical sampling is when you sample at the minimum needed (bandwidth just under half the sample rate—this could include some room for the filter).

And of course, one reason you might oversample is to do that bit-depth versus sample rate trick of getting more resolution.

Upsampling and downsampling fall under "sample rate conversion". It's typically a processing step done to data that's already captured, unlike oversampling.

For audio, we could say we're normally critically sampled, because even at 96 or 192 kHz, we allow recording of signal up to nearly half that. And converters might do oversampling and rate conversion.

So, pretty simple, but as for the gray areas, I can give an example. A plugin might need temporary frequency headroom, but it usually needs to return the same sample rate it was given. So that means upsampling, perhaps doing some non-linear process like "tube" saturation, then downsampling to the original. Here's where someone might call this an oversampled tube emulation. You could argue about whether that's correct—at least you can say that the tube process is receiving an oversampled signal. Either way, it's usually easier and it's well understood to just say the algorithm has oversampling, even though it would be more correct to say the tube process runs at whatever multiple of the sample rate.
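And a quick illustration of the undersampling/aliasing case mentioned above (a toy example, not from the thread): a 30 kHz tone sampled at 44.1 kHz with no anti-alias filter in front simply shows up as a 14.1 kHz component.

import numpy as np

fs = 44100
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 30000 * t)          # "recording" a 30 kHz tone with no anti-alias filter

spectrum = np.abs(np.fft.rfft(x)) / len(x)
freqs = np.fft.rfftfreq(len(x), 1 / fs)
print("strongest bin:", freqs[np.argmax(spectrum)], "Hz")   # 14100.0, not 30000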
Old 18th February 2020
  #1129
Gear Maniac
 

I read a very interesting article today on the site theaudiophileman.com.
Journalist Paul Rigby interviewed American mastering engineer Gavin Lurssen about the vinyl reissue of 16 George Harrison albums.
Gavin Lurssen is the guy who did the mastering of these new LPs. I didn't know him, but according to the journalist, he is a very talented and award-winning engineer.
The mastering was done after a transfer from the original tapes to 24-bit / 192 kHz PCM.

In the interview Gavin Lurssen says something that I found very interesting. He says : " Many people in the audio field believe that 96 kHz is fine. Our feeling is that 192 kHz is where it really should start ".

I think that the opinion of a multi-award-winning mastering engineer is much more credible than anything I could say.


.
Old 18th February 2020
  #1130
Lives for gear
 
monomer's Avatar
 

Quote:
Originally Posted by Tom Barnaby View Post
I think that the opinion of a multi-award-winning mastering engineer is much more credible than anything I could say.


.
So you're going to trust someone who needs a justification for the boatload of money he's been paid to do a transfer?
Of course he's going to claim that for them it only starts at the extreme of technical ability. Otherwise he wouldn't be able to ask the price in the first place.
But that has nothing to do with whether the advanced technical abilities actually add anything.
Old 18th February 2020
  #1131
Gear Maniac
 

Quote:
Originally Posted by monomer View Post
So you're going to trust someone who needs a justification for the boatload of money he's been paid to do a transfer?
Of course he's going to claim that for them it only starts at the extreme of technical ability. Otherwise he wouldn't be able to ask the price in the first place.
But that has nothing to do with whether the advanced technical abilities actually add anything.
This is not a question of money. The transfer was made by another engineer called Paul Hicks, who has worked at Abbey Road.
Gavin Lurssen doesn't need any justification. He is an award-winning mastering engineer. When he charges fees he doesn't do it because he uses 192 kHz, but because he is known for being a talented professional.
After discussing the sampling theorem at length, I think it may be useful to hear the opinion of a very good mastering engineer. This guy has a very successful career behind him and he is trained to listen and hear with a high degree of accuracy. At least, I think that his point of view has much more weight than mine. Anyway, I completely agree with him. 192 kHz is the way to go if we are looking for high fidelity in digital audio.

.
Old 18th February 2020
  #1132
Lives for gear
 
sax512's Avatar
 

Quote:
Originally Posted by Tom Barnaby View Post
Thanks.

This is a very good article with a lot of details. I took the time to read it thoroughly and I must admit that I was wrong.
Please, forget everything I said about ultrasonics lately.
The audio band we have to consider actually extends from 20 Hz to 20 kHz.

.


Quote:
Originally Posted by Tom Barnaby View Post
192 kHz is the way to go if we are looking for high fidelity in digital audio.

.
Dude!
Old 18th February 2020
  #1133
Lives for gear
 
sax512's Avatar
 

Quote:
Originally Posted by earlevel View Post
I think everyone's giving good info here, but as you might expect there are some gray areas. And we all pick a word at one time or another that's not the best choice.

Oversampling: Think, "overdoing it". A rate that's more than you need for a given bandwidth requirement. A 720 HP Hellcat to drive to pick up groceries. In contrast, a signal may be undersampled when the sample rate is too low (it could be that it's fine for most of the signal, but at times you get aliasing). And critical sampling is when you sample at the minimum needed (bandwidth just under half the sample rate—this could include some room for the filter).

And of course, one reason you might oversample is to do that bit-depth versus sample rate trick of getting more resolution.

Upsampling and downsampling fall under "sample rate conversion". It's typically a processing step done to data that's already captured, unlike oversampling.

For audio, we could say we're normally critically sampled, because even at 96 or 192 kHz, we allow recording of signal up to nearly half that. And converters might do oversampling and rate conversion.

So, pretty simple, but as for the gray areas, I can give an example. A plugin might need temporary frequency headroom, but it usually needs to return the same sample rate it was given. So that means upsampling, perhaps doing some non-linear process like "tube" saturation, then downsampling to the original. Here's where someone might call this an oversampled tube emulation. You could argue about whether that's correct—at least you can say that the tube process is receiving an oversampled signal. Either way, it's usually easier and it's well understood to just say the algorithm has oversampling, even though it would be more correct to say the tube process runs at whatever multiple of the sample rate.
Thanks. I always used the term oversampling related to plugins, but I guess it should be upsampling plugins instead.

So, ADC oversamples, DAC upsamples. I'll stick to this terminology.
Old 18th February 2020
  #1134
Gear Maniac
 

Quote:
Originally Posted by sax512 View Post
Dude!
192 kHz is not about recording ultrasonics. We will have the opportunity to discuss this later.

Good luck if you want to achieve high fidelity with 48 kHz. It will be a tough job !

Old 18th February 2020
  #1135
Lives for gear
 
monomer's Avatar
 

Quote:
Originally Posted by Tom Barnaby View Post
This is not a question of money. The transfer was made by another engineer called Paul Hicks, who has worked at Abbey Road.
Gavin Lurssen doesn't need any justification. He is an award-winning mastering engineer. When he charges fees he doesn't do it because he uses 192 kHz, but because he is known for being a talented professional.
Ok. So if he delivered a 44.1/16 master, would he still be in business?
Old 18th February 2020
  #1136
Lives for gear
 
sax512's Avatar
 

Quote:
Originally Posted by Tom Barnaby View Post
192 kHz is not about recording ultrasonics. We will have the opportunity to discuss this later.
Can't wait..

Quote:
Originally Posted by Tom Barnaby View Post
Good luck if you want to achieve high fidelity with 48 kHz. It will be a tough job !

Plenty of high fidelity examples even at 44.1 kHz. 48 kHz is overkill

By the way, an appeal to authority does not beat a mathematical theorem. Think with your own head. And if you read it on something called 'the audiophile man', the rule of thumb is that you'll be better off 90% of the time doing exactly the opposite of what they're telling you to do.
Old 18th February 2020
  #1137
Gear Maniac
 

Quote:
Originally Posted by monomer View Post
Ok. So if he delivered a 44.1/16 master, would he still be in business?
I think he would do this if some clients insisted on receiving a 44.1/16 master, but I don't think many people would be interested in a low-quality master, at least for producing vinyl. Vinyl is interesting only if it has a higher resolution than CD.
Anyway, this guy is well known in the business and he can charge good fees because he is considered a reliable professional.


.
Old 18th February 2020
  #1138
Gear Maniac
 

Quote:
Originally Posted by sax512 View Post
Can't wait..



Plenty of high fidelity examples even at 44.1 kHz. 48 kHz is overkill

By the way, an appeal to authority does not beat a mathematical theorem. Think with your own head. And if you read it on something called 'the audiophile man', the rule of thumb is that you'll be better off 90% of the time doing exactly the opposite of what they're telling you to do.
Mathematics is useful only if it is a good representation of reality. It looks like the sampling theorem is not a very good model for the recording of acoustic instruments. We will have the opportunity to discuss this later.

.
Old 18th February 2020
  #1139
Lives for gear
 

Quote:
Originally Posted by Tom Barnaby View Post
Quote:
Originally Posted by monomer View Post
So you're going to trust someone who needs a justification for the boatload of money he's been paid to do a transfer?
Of course he's going to claim that for them it only starts at the extreme of technical ability. Otherwise he wouldn't be able to ask the price in the first place.
But that has nothing to do with whether the advanced technical abilities actually add anything.
This is not a question of money. The transfer was made by another engineer called Paul Hicks, who has worked at Abbey Road.
Gavin Lurssen doesn't need any justification. He is an award-winning mastering engineer. When he charges fees he doesn't do it because he uses 192 kHz, but because he is known for being a talented professional.
After discussing the sampling theorem at length, I think it may be useful to hear the opinion of a very good mastering engineer. This guy has a very successful career behind him and he is trained to listen and hear with a high degree of accuracy. At least, I think that his point of view has much more weight than mine. Anyway, I completely agree with him. 192 kHz is the way to go if we are looking for high fidelity in digital audio.

.
Just a reminder... maybe to myself... you're not talking about what rate guys should be tracking at. You're pulling in a topic related to mastering.

Just sayin'.

This thread routinely bounces between opinions about mastering, with reply comments several posts later of "oh, I didn't mean do that rate for recording".
Old 18th February 2020
  #1140
Quote:
Originally Posted by Tom Barnaby View Post
Mathematics is useful only if it is a good representation of reality. It looks like the sampling theorem is not a very good model for the recording of acoustic instruments.
Sampling (and thus the sampling theorem) is typically recording and reproducing voltage over time. Two well defined dimensions. This is obviously independent of the class/shape/nature of physical events we're trying to track.

I think this puts your idea into logically questionable territory.

As with addition and multiplication, the same rules apply to all things we're trying to represent with them. The laws of math don't suddenly change because a guitarist appears.

Last edited by FabienTDR; 18th February 2020 at 07:43 AM..