Tips & Techniques: Intersample peaks

It's actually intersample everything, but the reason most people care is the peaks. The 44100 (or whatever) sample dots per second in a digital system are conceptual points of infinitely small duration. The actual/reproduced signal lives everywhere else than at the sample points. An intersample peak meter helps keep a check on the real peak level.

Perhaps the easiest way to think about intersample peaks is that sample point meters are too slow. A VU meter is way slower than an analogue peak level meter. Similarly, sample point meters are way slower than reconstructed meters. There's plenty of time for the waveform to wiggle above or below the levels indicated by the relatively infrequent sample points.


Here's the standard reference: http://www.cadenzarecording.com/pape...distortion.pdf
More reading in TC library: Tech library


I have some pics on my webpage, made for discussions on this topic. They're not as eloquent as the links above, but they may serve as a supplement.

Part of the intersample peak problem is that the sample dots usually don't happen to land on the very top of the waveform - statistically, it's far more probable that they land somewhere other than the very peak.



These two pics show (near) identical sine waves with a frequency very close to 1/4 of the sample rate (around 11025 Hz). At some stages the sample points will hit the peak of the waveform. At other points, they will land partway between the peaks, almost 3 dB lower. The upper pic is the wave shown in Sound Forge, which has a "non-reconstructed" waveform display. It's a connect-the-dots approach that doesn't correlate with what happens at the output of the reconstruction in the converter (DAC). The sample point level meter in Sound Forge will track the sample dots and show a level that oscillates by almost 3 dB, even though the output waveform is rock steady. The lower picture is the same wave shown in iZotope RX, a program that does a reconstruction prior to visualizing the wave, showing it more like it'll appear at the DAC output. It's obvious here that the reconstructed output level should be steady, even though the sample points are all over the place.
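The effect is easy to verify numerically. Here's a rough numpy sketch (my illustration - the 256-sample length and the exactly-fs/4 frequency are just chosen to make the phase relationship obvious): the same full-scale sine, sampled once with the sample points on the peaks and once with them straddling the peaks.

```python
import numpy as np

fs = 44100.0
f = fs / 4                       # exactly four samples per cycle
n = np.arange(256)

# The same full-scale sine, sampled at two different phases.
x_on_peak = np.sin(2 * np.pi * f / fs * n)              # samples land on the peaks
x_between = np.sin(2 * np.pi * f / fs * n + np.pi / 4)  # samples straddle the peaks

peak_on = 20 * np.log10(np.max(np.abs(x_on_peak)))
peak_off = 20 * np.log10(np.max(np.abs(x_between)))
print(f"samples on the peaks: {peak_on:+.2f} dBFS")   # ~0 dBFS
print(f"samples between:      {peak_off:+.2f} dBFS")  # ~-3 dBFS
```

Same analogue waveform, same true peak - but the sample point peak reads 3 dB apart depending on where the dots happen to fall.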





The other part of the problem with getting the overview inside the computer is that the reconstructed waveform has a final value that is not determined on a connect-the-dots basis; rather, it's the sum of all the samples that brings the big picture together. Each sample point doesn't only affect the level at that instant, it influences the final waveform hundreds of samples away on each side. This can create some really wild intersample peaks.

The picture below shows what a single sample looks like:



The upper row is the single sample shown with a connect-the-dots approach; the lower is the same single sample after it has been low pass filtered. Everything that is reconstructed at the output of the DAC is filtered in a similar manner. Notice that the sample point values are always true: the peak of that impulse will always be the final waveform level at that instant. What is not known, from looking at the sample points alone, is what level the intersample land will be.

Looking at the same impulse in a calculator (sin(x)/x) makes it clearer how it's possible to have a bunch of sample points that do not grow when added up, while the land in between can grow almost arbitrarily large if fed the right signal.



The wiggles on both sides of the impulse all go to zero at regular intervals (the sample clock), except at the central point where the value is one. Each impulse thus contributes exactly zero at every other sample instant and exactly one at its own, but only at the sample instants. The rest of the waveform, the intersample area, is reconstructed by summing a very large number of such impulses.
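This zero-at-every-other-instant property is exactly what numpy's built-in sinc function computes, so it can be checked directly (a small sketch of mine, not from the original article):

```python
import numpy as np

# numpy's sinc is the normalized sin(pi*t)/(pi*t) - the ideal reconstruction
# of a single unit sample sitting at n = 0.
t = np.arange(-5, 6)                # integer sample instants around the impulse
vals = np.sinc(t.astype(float))
print(np.round(vals, 6))            # zero everywhere except at its own instant

mid = np.sinc(0.5)                  # halfway between two sample instants
print(round(float(mid), 6))         # 2/pi, about 0.6366 - the "wiggle" is non-zero
```

At the sample instants the impulse is invisible to its neighbours; between them it contributes a substantial fraction of its own height.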

(more in Dan Lavry's sampling paper: http://lavryengineering.com/document...ing_Theory.pdf )


And this is how the individual impulses work together:



The picture above shows what happens when a bunch of such impulses are summed. The upper line shows two lone samples, one positive and one negative. These are the building blocks of sampling. The lower row is total silence followed by a short burst of maximum frequency. Notice that the first two sample points are the same as in the upper row: a maximum and a minimum value sample. The highlighted red parts, and a bunch more like them from the other samples, all add up to create the intersample peak in the row below. (There are more intersample peaks at other points too.) Notice that this also shows how audio sampling is able to preserve phase information, as the peak is displaced to the left with respect to the sample point.
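The silence-then-burst case can be reproduced numerically. This is a sketch of mine (summing truncated sinc impulses as a stand-in for an ideal reconstruction filter; the 20-sample silence and 20-sample burst lengths are arbitrary):

```python
import numpy as np

# Silence followed by a short burst at the Nyquist frequency
# (full-scale alternating +1/-1 samples).
x = np.concatenate([np.zeros(20), np.tile([1.0, -1.0], 10)])

# Sketch of ideal reconstruction: sum one sinc impulse per sample
# on a 16x oversampled time grid.
t = np.arange(0, len(x), 1 / 16)
y = sum(x[k] * np.sinc(t - k) for k in range(len(x)))

print(f"sample-point peak:  {np.max(np.abs(x)):.3f}")
print(f"reconstructed peak: {np.max(np.abs(y)):.3f}")
print(f"peak sits at t = {t[np.argmax(np.abs(y))]:.3f} samples")
```

No sample exceeds full scale, yet the reconstructed peak lands well above it - and between the sample instants, near the onset of the burst.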

The same thing happens whenever anything changes fast on the waveform. Intersample peaks are often a product of imposing arbitrarily fast changes on the signal. Hard limiting and/or clipping are typical ways to do that. That is also a typical way to create aliasing (as fast changes equal high frequencies, and frequencies above 22050 Hz have nowhere to go but to manifest themselves as aliases). I personally think it's good engineering practice to avoid aliasing and intersample peaks, especially considering translation to lesser DACs and further processing like MP3 coding. But to each their own.

If you want to see something really bad on the oversampled meter, try a sequence of maximum and minimum values that goes like this: "1010101101010" - notice that the alternating 1's and 0's suddenly change direction in the middle. The result depends on the filter being used in the reconstruction, with the intersample peak easily exceeding 10 dB!
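This can be tried in numpy (my sketch, using a truncated sinc sum as the reconstruction filter; with just this 13-sample excerpt the overshoot already lands around 7-8 dB, and extending the alternation on both sides, or using a longer filter, pushes it higher):

```python
import numpy as np

# Full-scale alternation that flips direction in the middle.
bits = "1010101101010"
x = np.array([1.0 if b == "1" else -1.0 for b in bits])

# Sketch of ideal (sinc) reconstruction on a 32x oversampled grid.
t = np.arange(0, len(x), 1 / 32)
y = sum(x[k] * np.sinc(t - k) for k in range(len(x)))

overshoot_db = 20 * np.log10(np.max(np.abs(y)))
print("sample-point peak:  +0.00 dBFS")
print(f"reconstructed peak: {overshoot_db:+.2f} dBFS")
```

The biggest contribution lands at the flip in the middle, where the two adjacent full-scale samples of the same sign line up with the tails of all the surrounding impulses.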


Noisy, square-looking signals are often rife with intersample peaks. I tried feeding a digital signal to the chain to see what peak levels occurred. The result was this:


The left area is RMS and sample point metering, while the right meter shows reconstructed values. Notice that the RMS shows up as -3 dB, the sample peak meter shows -4.5 dB, and the reconstructed meter shows the real peak almost reaching 1 dB above zero. Musical signals are never this bad, although the most abused victims of the loudness wars sometimes lean towards results like these.
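The three meter types can be sketched in a few lines of numpy (my illustration: sinc interpolation stands in for the reconstruction filter, and the clipped noisy test signal is made up, not the one from the screenshot):

```python
import numpy as np

def rms_db(x):
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

def sample_peak_db(x):
    return 20 * np.log10(np.max(np.abs(x)))

def true_peak_db(x, oversample=8):
    # Approximate the reconstructed peak by sinc-interpolating to a finer grid.
    n = np.arange(len(x))
    t = np.arange(0, len(x), 1 / oversample)
    y = np.sinc(t[:, None] - n[None, :]) @ x
    return 20 * np.log10(np.max(np.abs(y)))

# Made-up noisy, square-ish test signal, hard clipped at 0 dBFS.
rng = np.random.default_rng(0)
sig = np.clip(3 * np.sin(2 * np.pi * 0.23 * np.arange(512))
              + 0.3 * rng.standard_normal(512), -1.0, 1.0)

print(f"RMS:         {rms_db(sig):+.2f} dBFS")
print(f"sample peak: {sample_peak_db(sig):+.2f} dBFS")
print(f"true peak:   {true_peak_db(sig):+.2f} dBFS")
```

The ordering is the point: the sample peak meter reads exactly 0 dBFS, while the reconstructed (true) peak comes out above zero.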



Ceilings of -0.3 or -0.1 dBFS etc. are arbitrary. What matters are the real/intersample/reconstructed peaks. Clean processing only produces fractions of a dB of overshoot, aggressive loudness treatments can make for several dB, and worst case signals can in some situations give double-digit dB overshoots! Sample point meters are like analogue peak level meters: they're too slow to catch the shortest and highest peaks.

I use the IS peak meter like the other metering/test/visualizing stuff - mostly for technical tests, only occasionally throwing a glance at it while mastering. The interesting thing is the difference between IS peaks and sample point peaks. Some processing options are more prone to creating IS peaks than others. By observing the meter while fooling around and testing, I can quickly find out how the processing fares in regards to IS peaks. Some things, like clipping, are guaranteed to create IS peaks - so I step very cautiously if that is going to be used. Other stuff, like harmonic boosting (a la HEDD), will hardly create IS peaks at all. Some processing options fall in between those extremes. So to me, the most useful aspect of the IS peak meter is to avoid the "bad" processing options in the first place.

How much overload can be tolerated is highly dependent on the playback system and the musical content. I don't use a cheap DAC to test how it might sound, as there is no "perfect cheap DAC" that will sound bad in the same way as every other cheap DAC. Instead, I let the IS peak meter tell me how far above zero things go, and from there I try to make a guesstimate as to how that may interfere with the listening joy. If the peaks are very small and/or only occur in noisy parts (like a snare drum), some distortion is tolerable IME. If the peak occurs in, say, a bassy tone, the overload may create an obvious splat of distortion in the playback system. It's guesswork, but the guesswork can at least be quantified.

If the final destination is CD only, IS peaks are less troublesome than if the audio is going to be transferred to other media. These days, that means psychoacoustic coding. Just about any lossy coding will change the waveform and create a new set of sample points at the output of the decoder. If the signal is free from IS peaks and has about half a dB of headroom, coding and decoding at high rates can often be done without hitting the ceiling. As the bitrate shrinks, the peaks will change more, demanding more headroom. The typical case in point is the loud CD sitting right at the ceiling with a dB or two of IS peaks. This is guaranteed to create a lot of extra distortion when lossy coding is used. This is, IMHO, a contradiction: the masters are created to sound great on crap systems (no dynamics, little bass, etc.), yet they will have distortion beyond the ME's control when used in the typical crap system playback scenario - lossy coding.


Regards,

Andreas Nordenstam

Contributors: mw, Nordenstam
Created by Nordenstam, 12th October 2008 at 01:27 PM
Last edited by mw, 27th March 2012 at 03:05 PM



Comments for: Intersample peaks

#1 - vtone, 8th January 2009:
what happened to the missing images? this is a very important article!!!
