JR's digital-analog null test


JohnRoberts

Well-known member
Staff member
GDIY Supporter
Moderator
Joined
Nov 30, 2006
Messages
29,548
Location
Hickory, MS
The main problem with trying to null a digital path's input against its output is the conversion delay through the digital path, which will compromise null depth at HF.

My concept is to introduce a short digital delay in that digital path (for example, just short of 0.5 mSec for a 1 kHz sine wave input), then tweak the input sine wave such that the digital delay plus the conversion path delay equals one half cycle. It should be easier to tweak the sine wave frequency for best null than to tweak the digital delay to be exactly 1/2 cycle.

It should be trivial to null the two opposite-polarity sine waves in the analog domain. What doesn't null out is digital conversion distortion (assuming the two halves of the sine wave are the same). Delaying almost a full cycle and then performing the inversion outside the digital domain to null (like normal null tests) eliminates unsymmetrical sine wave distortion as an error source.

I don't have a digital rig capable of testing this, so I put this out there for anyone interested to play with.  Perhaps comparing a half-cycle delay null to a full-cycle delay null could reveal information about the input sine wave's linearity.  8)
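Before anyone wires anything up, here is a minimal Python sketch of the numbers involved. Everything in it is invented for illustration: the 96 kHz rate, the 0.5 mSec total path delay, and a toy cubic term standing in for real conversion error.

```python
import numpy as np

FS = 96_000           # simulated sample rate (an assumption, not a requirement)
PATH_DELAY = 0.5e-3   # digital delay + conversion latency, assumed 0.5 mSec total

def digital_path(x):
    """Toy stand-in for the A/D -> delay -> D/A chain: a fixed latency
    plus a small invented cubic error term for the null to find."""
    delayed = np.roll(x, int(round(PATH_DELAY * FS)))  # exactly 48 samples here
    return delayed + 1e-5 * delayed**3                 # hypothetical conversion error

# Pick f so the total delay is exactly half a cycle: f = 0.5 / PATH_DELAY = 1 kHz.
f = 0.5 / PATH_DELAY
t = np.arange(FS) / FS
dry = np.sin(2 * np.pi * f * t)

# A half-cycle delay already inverts the sine, so the "analog" null is a
# plain sum of dry + delayed; only the conversion error survives.
residual = dry + digital_path(dry)
print(f"null depth: {20 * np.log10(np.abs(residual).max()):.1f} dB re full scale")
```

With the error term set to zero the sum nulls exactly; with it in place the residual lands right at the injected distortion level, which is the whole point of the test.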

Have fun

JR

PS: Of course please point out any flaws in my theory. 
 
I think I had a similar idea myself.  But what does this method tell you about the signal that others won't? Sinewaves are good for testing amplitude, harmonics, phase. You could probably do the test with squarewaves as well.

But testing what a converter does to a complex signal isn't possible this way. And we have a tool for this, Audio Diffmaker. The results this software reveals track closely with what my ears are telling me.

Besides, a converter is always a compromise in the area of the reconstruction filter, at least at lower sampling rates. Today most of them appear to be optimized for maximum linearity in the frequency domain. It often happens on the chip by default and cannot be changed. But a less steep filter with less ringing IMO sounds a lot more natural, at least to my ears. It doesn't look as good on the manufacturer's website, though. But both my ears and Diffmaker tell the same tale...
 
I don't know... Using a much better digital converter to do the measurements is another way, but of course you get to a point where there is no much-better converter (at least not one available for the measurement). In that case it's possible to run the conversion many times so that the generated artifacts accumulate to a point where they are easily measurable by the same device.

In an ideal world we would have a perfect delay for the analog path and there you go. I guess you, BBD guru, know that's not an option.

I like the idea of having a way to compare the AD-DA path with a straight signal, but one sine wave at a time might not be very representative of some of the artifacts of the system. IMD comes to mind first.

JS
 
living sounds said:
I think I had a similar idea myself.  But what does this method tell you about the signal that others won't?
This (null) test tells you how much the conversion output deviates from perfect. I am not aware of any other way to quantify that. Even this is limited to the quality of the analog null circuit, but this can be a very simple, very high performance analog circuit.
Sinewaves are good for testing amplitude, harmonics, phase. You could probably do the test with squarewaves as well.
Yes, any symmetrical, accurately repeating waveform.
But testing what a converter does to a complex signal isn't possible this way.
I am not aware of any dynamic (changing) phenomenon associated with A/D or D/A conversion, so I do not expect to find any such artifacts. If the digital-analog null reveals any error product at all, it would be worthwhile to test at different amplitude levels (which exercise fewer codes); that will ultimately be limited by noise in the conversion and/or null path. Of course, testing at different frequencies could be performed with different delay times.
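To put rough numbers on "fewer codes", assuming a 16-bit converter (the levels are arbitrary):

```python
bits = 16
for dbfs in (0, -20, -40, -60):
    # Peak code count reached by a sine at the given level below full scale.
    peak = (2 ** (bits - 1) - 1) * 10 ** (dbfs / 20)
    print(f"{dbfs:4d} dBFS sine peaks at about +/-{peak:,.0f} codes")
```

A -60 dBFS sine swings over only about +/-33 codes, which is where quantization-related errors, if any, should be easiest to provoke.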
And we have a tool for this, Audio Diffmaker. The results this software reveals track closely with what my ears are telling me.
The audio diffmaker requires A/D conversion for all stems being compared, so it ASSumes perfect conversions. I am proposing a way to evaluate the accuracy of those conversions. 
Besides, a converter is always a compromise in the area of the reconstruction filter, at least at lower sampling rates. Today most of them appear to be optimized for maximum linearity in the frequency domain. It often happens on the chip by default and cannot be changed. But a less steep filter with less ringing IMO sounds a lot more natural, at least to my ears. It doesn't look as good on the manufacturer's website, though. But both my ears and Diffmaker tell the same tale...
I would much prefer having an objective method to evaluate conversion quality that does not depend on another converter.

JR

PS: Don't confuse apparent codec filter ringing with the Gibbs phenomenon, the ripple that comes with near-perfect removal of higher overtones from square waves.
 
joaquins said:
I don't know... Using a much better digital converter to do the measurements is another way, but of course you get to a point where there is no much-better converter (at least not one available for the measurement).
precisely
In that case it's possible to run the conversion many times so that the generated artifacts accumulate to a point where they are easily measurable by the same device.
I don't know how practical this is... If you record the product to re-loop through, the quality of the storage medium will probably dominate. If you run through a number of near identical conversions in series, you can impute some average performance, as long as nonlinearity doesn't cancel out.
In an ideal world we would have a perfect delay for the analog path and there you go. I guess you, BBD guru, know that's not an option.
but we can make an ideal lossless delay in the digital path, the basis for this approach.
I like the idea of having a way to compare the AD-DA path with a straight signal, but one sine wave at a time might not be very representative of some of the artifacts of the system. IMD comes to mind first.

JS
IMD is just another way to measure simple nonlinearity. With IMD the distortion product is a function of the two or more tones, instead of simple harmonic overtones. I found IMD very useful back in the 70s-80s to reveal in-band distortion products that are not attenuated by path equalization (for example RIAA roll-off, or digital anti-alias/anti-image filters) that would otherwise understate THD at HF.

A suitable sine wave well inside the digital passband should reveal its nonlinearities to a precision null test. I wouldn't expect the CODEC innards to have slew-rate issues, but any such trend could reveal itself at <10 kHz.

My favorite two-tone IMD pair, 19 kHz:20 kHz, repeats with a roughly 1 mSec period, so it could be used for this test with a 1 mSec delay. Since those frequencies are near the LPF skirt, I might be tempted to use different IMD frequencies (say 15 kHz:16 kHz).

Any accurately repeating waveform can be used. In fact, a short complex digital audio sample rigged to repeat exactly with a precise period could also be used.
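A quick check of those repetition periods (a two-tone signal repeats with period 1/gcd(f1, f2)):

```python
from math import gcd

def repetition_period_ms(f1_hz, f2_hz):
    """A two-tone signal repeats with period 1 / gcd(f1, f2)."""
    return 1000 / gcd(f1_hz, f2_hz)

print(repetition_period_ms(19_000, 20_000))  # 1.0, so a 1 mSec delay lines up
print(repetition_period_ms(15_000, 16_000))  # 1.0 as well
```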

JR

PS: Another oddball test: imagine connecting the output of a codec back to its input through a switch that selects between the dry input and the loop-through. It might be difficult to toggle between the two modes, but in principle the loop-through degradation would accumulate pretty quickly.  Perhaps use a tone burst (or audio sample) with a path delay long enough to toggle between loading and loop-through during the quiet spaces bracketing the audio sample. A several-mSec loop could be repeated a large number of times.  (IIRC, doing this with old digital delay lines degraded audibly pretty quickly.)
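Out of curiosity, a crude simulation of that accumulation idea. The per-pass imperfections are completely invented (a 0.01% gain error, a little analog noise, an undithered 16-bit requantization), so only the trend matters, not the numbers:

```python
import numpy as np

FS, BITS = 48_000, 16
rng = np.random.default_rng(0)
t = np.arange(FS // 10) / FS              # 100 ms of signal
tone = 0.5 * np.sin(2 * np.pi * 997 * t)  # 997 Hz, deliberately off the sample grid

def loop_once(x):
    """One invented D/A -> A/D round trip: a 0.01% gain error, a little
    analog noise, then an undithered requantization to BITS."""
    q = 2.0 ** (BITS - 1)
    analog = x * 1.0001 + rng.normal(0.0, 1e-5, x.shape)
    return np.clip(np.round(analog * q), -q, q - 1) / q

x, done = tone.copy(), 0
for target in (1, 10, 100):
    while done < target:
        x, done = loop_once(x), done + 1
    err_db = 20 * np.log10(np.std(x - tone) / np.std(tone))
    print(f"after {target:3d} passes: error sits at {err_db:6.1f} dB")
```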
 
Well, Audio Diffmaker tests a DA-AD chain, while your method tests an AD-DA chain. Ideally the results of passing these either way would be the same. You can generate perfect (for all intents and purposes) waveforms in the digital domain, of course, so no need to use a pre-recorded (and converted) signal.

Strange effects happen with the complex algorithms of sigma-delta converters; these might be dynamic or even unpredictable in nature. But I don't actually know.

As for the filters, the ringing and other artifacts could be avoided or at least mitigated at the price of an increase in latency and/or increased processing power. 
 
living sounds said:
Well, Audio Diffmaker tests a DA-AD chain, while your method tests an AD-DA chain. Ideally the results of passing these either way would be the same.
except Diffmaker is using an already band-limited (digital) signal for its source ... my approach compares to an analog input signal, so it could be wide band, while we expect the digital path to be band-limited. So that could show up as error in a null test (unlikely from sine wave stimulus).
You can generate perfect (for all intents and purposes) waveforms in the digital domain, of course, so no need to use a pre-recorded (and converted) signal.
yes
Strange effects happen with the complex algorithms of sigma-delta converters; these might be dynamic or even unpredictable in nature. But I don't actually know.
I don't see how that could be dynamic (changing) unless they are doing tricks I've never heard of. 
As for the filters, the ringing and other artifacts could be avoided or at least mitigated at the price of an increase in latency and/or increased processing power.
The oversampled converters allow for a much less aggressive LPF.  But I don't want to speculate about hypotheticals; I'd like objective measures of converter deviation from ideal. If Diffmaker can do that, what does it tell you about the "strange" effects in your delta-sigma conversions?

JR
 
I've found Diffmaker useful in a relative sense: which converter is closer to the original source. But it's rather useless when trying to quantify what the difference between converters actually is. This becomes a problem because certain inaccuracies are more intolerable than others.
 
JohnRoberts said:
I don't know how practical this is... If you record the product to re-loop through, the quality of the storage medium will probably dominate. If you run through a number of near identical conversions in series, you can impute some average performance, as long as nonlinearity doesn't cancel out.
I've done it at some point: with multiple channels of ADDA, looping back through several of them, the errors add up pretty fast. It could also be automated, depending on your intent, but let's say a 10 s track of a 1 kHz sine wave is plenty of samples to do the math; you can leave it running for a while and come back after a coffee. Are you saying ECC DDR4 has enough errors to compromise the signal? Unless you use direct routing you will always have data-transfer corruption, and even then you could have some corruption on the data bus from the ADC to the ASIC and on to the DAC.
but we can make an ideal lossless delay in the digital path, the basis for this approach.
IMD is just another way to measure simple nonlinearity. With IMD the distortion product is a function of the two or more tones, instead of simple harmonic overtones.
[...]
A suitable sine wave well inside the digital passband should reveal its nonlinearities to a precision null test. I wouldn't expect the CODEC innards to have slew-rate issues, but any such trend could reveal itself at <10 kHz.
I'm interested; I don't have the analog setup here to do such tests, but I would if I could.
My favorite two-tone IMD pair, 19 kHz:20 kHz, repeats with a roughly 1 mSec period, so it could be used for this test with a 1 mSec delay. Since those frequencies are near the LPF skirt, I might be tempted to use different IMD frequencies (say 15 kHz:16 kHz).
I can wait 1ms to get my answer back, no problem with that. BTW, WTF is Sec?  :p
Any accurately repeating waveform can be used. In fact, a short complex digital audio sample rigged to repeat exactly with a precise period could also be used.

JR
Seriously? Digital sample?

I do like the idea of characterizing the converters in the analog domain, a lot. But note that with this method you characterize the combination of two converters at once, so you don't know which one to blame for any artifact you find. Maybe just: a low-THD sine wave generator, ADC, and MATLAB for the A/D results; a MATLAB-generated sine wave, DAC, and analog THD meter for the D/A results.

A few missed points:
    You can't measure phase with your method, as you are correcting your timing to get good cancellation.
    The 'ideal' digital delay is discrete, so you might not be able to obtain perfect cancellation, getting worse as frequency increases.
    How accurately the analog generator reproduces consecutive cycles (a sine wave is easy enough; two sine waves for the IMD not so much, I guess, but this is accessory).
    I never quite got the PS part, but never mind.
JS
 
There was a presentation by ESS (a converter manufacturer) that showed some of the artifacts of SD conversion, like spurious tones: very hard to predict and signal-dependent. Unfortunately the video has vanished from the internet.

You're right about the bandwidth limitation, of course. This presents much less of a challenge for the converter.

Diffmaker uses negative interference, so it's highly sensitive to phase. Naturally, AC-coupled converters won't get good results. I've found that some widely used high-end converters have funny things going on that Diffmaker reveals. But so does a test of the phase response (I used the free HOLMimpulse, since there is no plugin to measure phase properly for some reason).

Diffmaker also revealed that my self-built NE5534-based input stage for the PCM4222EVM board indeed yielded better results than the differential op amp input stage that came with it. The latter had a somewhat muddy bass and a slightly fizzy high end, and the difference file amplified these deviations.

I'll soon be able to test the new flagship RME ADI-2 Pro converter, which lets the user select from a variety of filters. I'm expecting the combination of ADI-2 Pro DA and PCM4222EVM AD to yield the best Diffmaker results yet, but we'll see.
 
john12ax7 said:
I've found Diffmaker useful in a relative sense: which converter is closer to the original source. But it's rather useless when trying to quantify what the difference between converters actually is. This becomes a problem because certain inaccuracies are more intolerable than others.
That is the classic problem with null testing: it only tells you that two stems are different, not which one is at fault. I have thought about this, and for consoles, where you often have multiple identical circuit paths, you could compare one channel to an average of several channels, which could help identify outlier channels but not much more.

For my proposed test I am comparing the digital path vs. the dry input, so any difference between them is all error.

JR
 
joaquins said:
I've done it at some point: with multiple channels of ADDA, looping back through several of them, the errors add up pretty fast. It could also be automated, depending on your intent, but let's say a 10 s track of a 1 kHz sine wave is plenty of samples to do the math; you can leave it running for a while and come back after a coffee. Are you saying ECC DDR4 has enough errors to compromise the signal?
??? I googled that and it appears you are talking about a digital memory chip...

Digital data transfer can use error checking (like parity bits) with only modest overhead to ensure no/minimal errors.
Unless you use direct routing you will always have data-transfer corruption, and even then you could have some corruption on the data bus from the ADC to the ASIC and on to the DAC.
If there were significant digital data corruption, it would not conveniently occur only in the LSBs; it would also randomly ding the MSBs, and we would hear huge noise spikes from even one MSB error. I trust digital data to be relatively robust.
I'm interested, I don't have the analog setup here to do such tests but I would if I could.
Any console with two inputs (and a polarity switch) can do the null; what is also needed is a digital A/D/A with delay capability.
I can wait 1ms to get my answer back, no problem with that. BTW, WTF is Sec?  :p
Seriously? Digital sample?
Sec = second
I do like the idea of characterizing the converters in the analog domain, a lot. But note that with this method you characterize the combination of two converters at once, so you don't know which one to blame for any artifact you find. Maybe just: a low-THD sine wave generator, ADC, and MATLAB for the A/D results; a MATLAB-generated sine wave, DAC, and analog THD meter for the D/A results.
The purity of the source is less critical than how constant it is from cycle to cycle, as distortion will null out too.

I do not expect to find huge errors so characterizing the A/D and D/A together seems like a good start... If we find something to measure we can play with swapping in different A/D or different D/A to realize less deviation from perfect conversions.
A few missed points:
    You can't measure phase with your method, as you are correcting your timing to get good cancellation.
    The 'ideal' digital delay is discrete, so you might not be able to obtain perfect cancellation, getting worse as frequency increases.
I expect it will be easier to tweak the sine wave frequency than the delay resolution, but one or the other needs to be tweaked for the deepest null.
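To put a number on that: if the total delay misses an exact half cycle by eps, the dry sine plus its delayed copy sums to an amplitude of 2*sin(pi*f*eps), and that sets the null floor. A small sketch, assuming the delay is off by one sample at 96 kHz:

```python
import numpy as np

def null_floor_db(f_hz, delay_error_s):
    """Residual of sin(2*pi*f*t) + sin(2*pi*f*(t - T/2 - eps)):
    its amplitude is 2*sin(pi*f*eps), shown here in dB re the test tone."""
    return 20 * np.log10(2 * np.sin(np.pi * f_hz * delay_error_s))

eps = 1 / 96_000  # delay off by one sample at an assumed 96 kHz rate
for f in (1_000, 10_000):
    print(f"{f:6d} Hz: null floor {null_floor_db(f, eps):6.1f} dB")
```

Roughly -24 dB at 1 kHz but only about -4 dB at 10 kHz for a one-sample miss, so either the delay gets sub-sample resolution or the frequency gets tweaked, as suggested.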

How accurately the analog generator reproduces consecutive cycles (a sine wave is easy enough; two sine waves for the IMD not so much, I guess, but this is accessory).
It seems two tones could be as repeatable as one; it is the interference that repeats at the slower rate. (I made my own 19:20 kHz IMD test rig back in the day by modifying an old SMPTE IMD rig for higher frequency.)
I never quite got the PS part, but never mind.
JS
OK

JR
 
JohnRoberts said:
??? I googled that and it appears you are talking about a digital memory chip...
You mentioned the storage medium as a limiting factor; I didn't get what you meant, as I wouldn't expect much error there.
Digital data transfer can use error checking (like parity bits) with only modest overhead to ensure no/minimal errors.
If there were significant digital data corruption, it would not conveniently occur only in the LSBs; it would also randomly ding the MSBs, and we would hear huge noise spikes from even one MSB error. I trust digital data to be relatively robust.
The USB protocol used in sound cards doesn't have error correction, so unless it's handled in software, corruption can happen.

Any console with two inputs (and a polarity switch) can do the null; what is also needed is a digital A/D/A with delay capability.
I don't even have that, but I don't know how deep a null a console can achieve between two channels fed the same signal.
Sec = second
SI says it's 's'... It caught my attention coming from you.
The purity of the source is less critical than how constant it is from cycle to cycle, as distortion will null out too.

I do not expect to find huge errors so characterizing the A/D and D/A together seems like a good start... If we find something to measure we can play with swapping in different A/D or different D/A to realize less deviation from perfect conversions.
I agree with that; I was just clarifying.
I expect it will be easier to tweak the sine wave frequency than the delay resolution, but one or the other needs to be tweaked for the deepest null.
It seems two tones could be as repeatable as one; it is the interference that repeats at the slower rate. (I made my own 19:20 kHz IMD test rig back in the day by modifying an old SMPTE IMD rig for higher frequency.)
OK

JR
The jitter of the oscillator is what bothers me: measuring this way, you are trying to find mismatches orders of magnitude lower than the original signal. So you need an oscillator that you can control and that will lock to a frequency where the delay matches the period. It's not impossible, but it requires a highly stable PLL locked to a delayed signal. Next week I have a control theory test, so maybe I could do the math as training...

JS
PS: can we measure the null signal with a digital instrument or do we absolutely need a needly thing swinging around?
 
joaquins said:
You mentioned the storage medium as a limiting factor; I didn't get what you meant, as I wouldn't expect much error there.
For null depth due to limited time-increment resolution, it might be easier to tweak the sine wave frequency once the delay time is close.
The USB protocol used in sound cards doesn't have error correction, so unless it's handled in software, corruption can happen.
I suspect the big memory chips already use error checking and error correction internally, after they first map out bad blocks.
I don't even have that, but I don't know how deep a null a console can achieve between two channels fed the same signal.
SI says it's 's'... It caught my attention coming from you.
I agree with that; I was just clarifying.
The jitter of the oscillator is what bothers me: measuring this way, you are trying to find mismatches orders of magnitude lower than the original signal. So you need an oscillator that you can control and that will lock to a frequency where the delay matches the period. It's not impossible, but it requires a highly stable PLL locked to a delayed signal. Next week I have a control theory test, so maybe I could do the math as training...
I guess it depends on the sine wave oscillator... In my experience, using real RC-integrator SVF stages to set the frequency, the pitch is relatively stable. Amplitude jitter may be another issue depending on loop time constants... slow AGC loops take a long time to settle.  OTOH I made one sine wave oscillator using OTAs for the frequency adjustment, so the pitch would change while you watched it.  ;D ;D

A digitally generated sine wave from a look-up table will be as stable as the digital clock (for better and worse). 
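A minimal sketch of such a look-up-table oscillator (the table size and sample rate are arbitrary choices); every run is bit-identical, and its pitch stability is exactly that of the sample clock:

```python
import numpy as np

FS, TABLE_LEN = 48_000, 4096
TABLE = np.sin(2 * np.pi * np.arange(TABLE_LEN) / TABLE_LEN)

def lut_sine(f_hz, n_samples):
    """Phase-accumulator oscillator: frequency set by the phase increment,
    jitter-free apart from whatever the sample clock itself does."""
    phase = (np.arange(n_samples) * f_hz * TABLE_LEN / FS) % TABLE_LEN
    return TABLE[phase.astype(int)]  # truncating lookup; interpolation lowers distortion

burst = lut_sine(1_000.0, 480)  # ten cycles of 1 kHz
```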
JS
PS: can we measure the null signal with a digital instrument or do we absolutely need a needly thing swinging around?
The error will just be a voltage that can be measured any way you choose. I would listen to it too...

JR
 