[quote author="mikep"]Distortion is any deviation from perfect reproduction. It can be time, amplitude, phase, waveshape (harmonics), and other things we haven't yet quantified. Saying that distortion is a transfer function makes it sound like you think you've figured out all the terms and are just searching for the coefficients. That might lead to a better understanding of the low-order effects, but if you are trying to judge overall performance this approach is fundamentally flawed.
There is one uncommon measurement that takes *everything* into account, even the stuff we don't yet know we are looking for: the phase cancellation test. You take your output signal and mix it with an inverted copy of the input signal. The level must be set precisely (tune by hand) and the summing amp should be "free of distortion". I think Mr. Hafler wrote about this technique for testing power amps. It works; you can listen to the residual or look at it on a scope. No notch filter is needed. Why is this test not used more? Why am I not using it? :roll:
I am guilty of using a traditional distortion analyzer. But I think there is a lot more info in there than the meter/scope is telling you. Just listen to the residual. Your ear can do a pretty good job of ignoring the fundamental bleed-through, the analyzer's own distortion AND noise. I can usually hear some low-level, higher-order products down in the noise that don't show up on a scope. You listen to the looped-back residual for a few seconds and then switch to the input; even when the meter is at its measurement limits you can often hear a change. For what it's worth, the frequencies I use are 400 Hz and 200 Hz. This way the harmonics are in a range where they are easy to pick out. I guess there is a bit of a learning curve to this method, but anyone with a musical background or lots of critical listening experience should try it.
mike p[/quote]
I too am a fan of null testing. IIRC it was popular in DIY audiophile circles decades ago. I have used it before to convince people of similarity between properly working products... People are often influenced by brand names and expectations.
The good thing about null testing is it is a kitchen sink kind of test revealing the entire difference. One shortcoming is it doesn't reveal which DUT is causing the error in product to product comparisons.
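The null test described above can be sketched in a few lines. This is a minimal illustration, not anyone's actual test rig: numpy stands in for the summing amp, a tanh soft-clipper is a hypothetical DUT, and a least-squares gain fit plays the part of the hand-tuned level trim.

```python
# Null (phase-cancellation) test sketch: sum the DUT output with a
# gain-matched, inverted copy of the input and inspect the residual.
import numpy as np

fs = 48000
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 400 * t)   # input tone (400 Hz, as above)

def dut(sig):
    # hypothetical stand-in DUT: mild soft clipping
    return np.tanh(1.2 * sig)

y = dut(x)

# "Tune by hand": a least-squares gain fit finds the level that nulls
# the linear part of the output against the inverted input.
g = np.dot(y, x) / np.dot(x, x)
residual = y - g * x                    # everything that isn't the input

null_depth_db = 20 * np.log10(np.sqrt(np.mean(residual**2))
                              / np.sqrt(np.mean(y**2)))
print(f"residual is {null_depth_db:.1f} dB below the output")
```

The residual here is pure distortion product (mostly third harmonic from the tanh curve), which is exactly the "everything" this test catches; in hardware the same residual goes to a scope or headphones.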
I find the common practice of THD analyzers ignoring phase shift a little misleading. The common error term in a NF amplifier is a 90° phase-shifted (leading) version of the output (but referenced to the input). This sinusoidal error term at the fundamental frequency is completely ignored by THD test sets. For a non-sinusoidal stimulus, that same error term is far from a harmless phase-shifted version of the output. Instead it represents the derivative, or rate of change, of the output waveform (i.e. for a triangle wave the error is a square wave; for a square wave the error is two spikes).
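The triangle-wave claim is easy to demonstrate numerically. In the sketch below a one-pole lowpass stands in for the closed-loop response H(s) = 1/(1 + s/wc) of a hypothetical feedback amp (the pole frequency and stimulus are illustrative assumptions). For that model the error (input minus output) is the scaled derivative of the output, so a triangle stimulus produces a square-ish error rather than a benign phase-shifted triangle.

```python
# Error term of a single-pole closed loop: err = (input - output)
# tracks the derivative of the output, not a phase-shifted copy.
import numpy as np

fs = 96_000
f0 = 1_000          # triangle fundamental
fc = 20_000         # assumed closed-loop pole

t = np.arange(4 * fs // f0) / fs
tri = (2 / np.pi) * np.arcsin(np.sin(2 * np.pi * f0 * t))  # triangle wave

# one-pole lowpass as the closed-loop transfer function
a = 1 - np.exp(-2 * np.pi * fc / fs)
y = np.empty_like(tri)
state = 0.0
for n, xn in enumerate(tri):
    state += a * (xn - state)
    y[n] = state

err = tri - y              # the feedback error term
dydt = np.gradient(y)      # numerical derivative of the output

# skip the brief start-up transient, then compare shapes
r = np.corrcoef(err[10:], dydt[10:])[0, 1]
print(f"correlation(error, dy/dt) = {r:.3f}")
```

The correlation comes out essentially 1: for the triangle input the error waveform is the square wave the text describes, and none of it registers on a harmonics-only THD reading at the fundamental.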
Years ago when I was still trying to design for audio-fools I settled on a two-tone IMD test. I used 19 and 20 kHz at 1:1 as test tones, under the premise that they were difficult but still within the valid audio band. Further, the distortion product at 1 kHz was smack in the middle of human hearing, so clearly audible. I found this especially applicable to phono preamp design, where the RIAA playback EQ would boost that IM product another 20 dB relative to the 19-20 kHz stimulus, unlike simple THD products whose higher harmonics were simply rolled off. In other words, THD testing understated nonlinearity while my variant two-tone IMD overstated it by 20 dB.
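The arithmetic of the two-tone test can be sketched as follows. The 1% second-order term is an assumed stand-in nonlinearity, not any particular product; its f2 - f1 cross term lands the difference product at exactly 1 kHz, where an FFT (or an ear) picks it out easily.

```python
# Two-tone IMD sketch: 19 kHz + 20 kHz at 1:1 through a weak
# second-order nonlinearity; look for the 1 kHz difference product.
import numpy as np

fs = 192_000
N = fs                       # one second -> 1 Hz FFT bins, tones on-bin
t = np.arange(N) / fs
x = 0.5 * np.sin(2 * np.pi * 19_000 * t) + 0.5 * np.sin(2 * np.pi * 20_000 * t)

y = x + 0.01 * x**2          # hypothetical DUT: 1% second-order term

spec = np.abs(np.fft.rfft(y * np.hanning(N))) / N
freqs = np.fft.rfftfreq(N, 1 / fs)

def level_db(f):
    # spectrum level at the bin nearest frequency f
    i = np.argmin(np.abs(freqs - f))
    return 20 * np.log10(spec[i])

imd_rel_db = level_db(1_000) - level_db(19_000)
print(f"1 kHz IMD product: {imd_rel_db:.1f} dB relative to the 19 kHz tone")
```

With these assumed numbers the cross term sits roughly 46 dB below the tones; per the RIAA argument above, playback EQ would then lift that product about another 20 dB relative to the stimulus.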
I found this IMD test pretty revealing of how designs could perform with real world audio. I never had much use for the alphabet soup of TIM, SID, whatever. Slew related distortions just don't happen in adequately fast, properly band passed circuitry. YMMV.
JR