I think we need to clear up definitions here -- sample rate conversion is not upsampling.

OK, here's part of the problem. When I use the word "upsampling", I mean to increase the sample rate of an audio signal by means of a sample rate conversion algorithm. And I think I actually stated that somewhere up there...

Oversampling is averaging. Oversampling reads the wave multiple times and averages those numbers to assure accuracy. Averaging means to take several readings of something and to choose one value that represents the 'mean', or average value.

Perfect. But again, oversampling has nothing to do with SR conversion and I can't see how I may have implied that?

Sample rate conversion is approximation-- it has nothing to do with averaging.

My original statement was "the number would contain repeats since there is no real way to interpolate the non-existent map data" which you said was not true because the SR algorithm averages data...

So then, let me restate my original point; there is NO REAL way to accurately interpolate non-existent map data. As we see above, scaling completely fails to accurately account for any missing information.

You're right, SR conversion does not "average" data in the strict mathematical sense of the word (average of a and b = (a + b)/2). "Approximation", as you suggest, is a much more accurate term. Furthermore, you're right in that there is no *real* way of interpolating what was there before digitizing.

However,

**Upward SR conversion algos DO NOT REPEAT SAMPLES, AND THEY DO INDEED TRY TO INTERPOLATE THE NON-EXISTENT INFORMATION.** To do this, they look at the level and rate of change of signal amplitude *across a range of samples*, and interpolate the interim samples that way.

Let's look at a similar example to the one above but without using fractions to make it more easily understandable:

A 1-second sample train recorded at a 4Hz sample rate looks like this:

0 - 4 - 8 - 12

A 4Hz to 8Hz SR conversion algo will **not** do this:

0-0-4-4-8-8-12-12

Instead, it will look at how the rate of change of level behaves around two contiguous samples (the wave's second derivative in the given interval), assume that delta-t is sufficiently small that this number must be the same between the samples it is analysing (or somehow come up with a better approximation), and then use this information to come up with the value of the extra sample it needs to create:

0 - 2 - 4 - 6 - 8 - 10 - 12

... which in this particular example, since the wave is a straight line and its second derivative in this interval is 0, just happens to be the average between samples. But this is certainly not what usually happens (although I'm sure there are SR algos that just take the average between samples...).
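To make the repeat-vs-interpolate distinction concrete, here's a minimal Python sketch of the 4Hz to 8Hz example above. The function names are made up for illustration, and a real converter uses much longer band-limited (windowed-sinc/polyphase) filters rather than two-point linear interpolation -- this is only the simplest case where, as noted, interpolation reduces to the average between neighbours:

```python
def repeat_samples(samples):
    """What an upward SR converter does NOT do: duplicate each sample."""
    out = []
    for s in samples:
        out.extend([s, s])
    return out

def linear_interpolate_2x(samples):
    """Insert the midpoint between each pair of contiguous samples."""
    out = []
    for a, b in zip(samples, samples[1:]):
        out.extend([a, (a + b) / 2])
    out.append(samples[-1])
    return out

train = [0, 4, 8, 12]
print(repeat_samples(train))         # [0, 0, 4, 4, 8, 8, 12, 12]
print(linear_interpolate_2x(train))  # [0, 2.0, 4, 6.0, 8, 10.0, 12]
```

The second output matches the 0 - 2 - 4 - 6 - 8 - 10 - 12 train above; the first is the naive sample-doubling that real converters avoid.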

If the two SRs are not multiples of each other, the process is still similar, except that every single output sample will be interpolated by the computer.
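The non-integer-ratio case can be sketched the same way: each output instant falls at a fractional position between two input samples, so (almost) every output value has to be computed rather than copied. Again, `resample_linear` is a hypothetical name and simple linear interpolation stands in for the proper band-limited filter a real converter would use. Here the 4Hz train is resampled to a hypothetical 6Hz rate:

```python
def resample_linear(samples, sr_in, sr_out):
    """Resample by linear interpolation between the two nearest input samples.

    Output instants land at fractional input positions, so each output
    value is a weighted blend of its two neighbouring input samples.
    """
    out = []
    n = 0
    while True:
        t = n / sr_out          # time of the n-th output sample
        pos = t * sr_in         # fractional position in the input train
        i = int(pos)
        if i >= len(samples) - 1:
            break               # stop rather than extrapolate past the end
        frac = pos - i
        out.append(samples[i] * (1 - frac) + samples[i + 1] * frac)
        n += 1
    return out

print(resample_linear([0, 4, 8, 12], 4, 6))
# roughly [0.0, 2.667, 5.333, 8.0, 10.667]
```

Notice that only the first sample (and the one landing exactly on an input instant) survives unchanged; the rest are interpolated, which is the point made above.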

most songs on the radio are recorded to Protools, mixed on an SSL or neve, and the mix is converted back to digital. After that, Bob Ludwig and all of the other mastering engineers convert back to analog to eq and compress the material then re-convert it back to digital.

True, but this is only done because putting your mix through a Neve or an SSL or a Manley compressor or some other cool piece of gear largely offsets the losses caused by the extra D/A-A/D conversions. Read my previous post on the matter.

Peace,

Al.