Analysis tools - newbie question

GroupDIY Audio Forum


morls

Hi,
I've built a couple of Drip Pultecs, and have just got the Flux Pure Analyser software up and running. I'm very much a beginner at all this, and so was wondering if anyone could suggest some reading to learn about analysing audio circuits? In particular, how to interpret a Transfer Function test.

The signal generator outputs through my Merging Technologies HAPI interface, and the test input can be selected from the preamps in the HAPI. As a test I've grabbed a magnitude spectrum of both Pultecs with no input signal and EQ bypassed, just to see what is happening. The main peak is at 50Hz, so I'm assuming this is the mains frequency? There are different peak amplitudes between the two units, and there's something happening at around 4k in Pultec 1. I'm not sure if this indicates anything is amiss?

Preamp input gain is 0dB for these grabs.

Cheers
Stephen
 

Attachments

  • Pultec 1 bypass.PNG (157.5 KB)
  • Pultec 2 bypass.PNG (182.1 KB)
  • Pultec 1 and 2 bypass.PNG (193 KB)
The main peak is at 50Hz, so I'm assuming this is the mains frequency?
Indeed. It's impossible to estimate how good or bad it is without a reference signal. You need to inject a known signal, preferably at 0 dBu; then we can estimate the impact of the noise.
There are different peak amplitudes between the two units,
This is not unusual in itself. No two units are perfectly equal. One may have been placed closer to an interference source. You should first reorient the unit to find a minimum, which will also help you find the source of the interference.
and there's something happening at around 4k in Pultec 1.
Yes, that's a bit strange. Is it still there if you just short the input of your card?
Is it still there if you do a loopback (connecting the card's output to an input)?
 
Thanks for bringing the Flux software to the attention of the users here. I did try the site without any luck; not sure if it's a problem at my end or not.
REW is another audio analyser that's well worth a look, probably the most popular one here for its distortion measurement capability. John Mulcahy, the creator of REW, has also been really receptive and helpful in implementing suggestions and tweaks from users. It's also available free and can use any sound interface you happen to have.
 
The 50 Hz component is due to magnetic induction from the mains transformer into the chassis and other transformers. Positioning of the transformers relative to each other plays a huge part; playing around with transformer orientation and mounting can get you a better result, but that's very difficult after you've settled on a particular layout. Best bet is to keep the leads on the transformers long initially until you work out the best positions. You can also consider extra metal screening, or even using one component to shield another from hum. Transformers with their cores in parallel tend to fare worst, whereas positioning them at 90 degrees will usually result in less magnetically induced noise. The best approach is to house the PSU in a separate chassis a few feet or more away, though that's not always convenient or possible.
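Very roughly, and as a trend only (this assumes coupling falls with the cosine of the angle between core axes, which is a crude first-order model of real leakage fields):

```python
# Sketch: first-order model of hum coupling vs transformer orientation.
# Assumes induced voltage scales with cos(angle between core axes);
# real leakage fields are messier, so treat this as a trend only.
import math

v_parallel_mv = 1.0   # assumed induced hum with the cores parallel (0 degrees)
for angle_deg in (0, 30, 60, 85, 90):
    v = v_parallel_mv * abs(math.cos(math.radians(angle_deg)))
    print("%2d deg: %.2f mV" % (angle_deg, v))
# 90 degrees ideally nulls the coupling; in practice you hunt for the
# minimum by rotating the transformer while watching the analyser.
```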
 
There are many possible sources of noise when trying to analyze a piece of gear like this. 50 Hz is clearly mains hum. But the source of it could be from the PT or because of a power cable in a wall or because your earth connection isn't very good or you don't have the source loaded or because your wires are not properly dressed or you have a ground loop with your audio interface or ....

The very first thing you must do is test your test rig. Specifically, run a "loopback test". Take the output of your audio interface and run it directly into the input with one short cable. Then generate a tone (use something like WavePad or whatever software to make a wav file with distortion-free pure tones or bursts of tones at 100Hz, 1kHz and 10kHz at ~0.9 dBFS) and then run Audacity or something like it to play the tone out and record the input simultaneously (aka overdub).

Adjust the input level control (if there is one) to match the level of your recording, then put some tape on it so that you don't move it by accident. Adjust the output level so that the input is ~6-10 dB below the clipping point. Make your loopback recording. Use the edge of a tone burst to align the samples of the recording with the source wav file as perfectly as you can. Then subtract the input from the output and see what sort of difference spectrum you have. Adjust the level ever so slightly so that the two signals exactly cancel out, and repeat that process, adjusting your tape, until you get a good null. Independently, look at the noise floor in the gaps between tone bursts.

Your measurements will only be as good as your test rig, so you need to do this first to see how good your analysis tool chain actually is. Fortunately USB audio interfaces are actually pretty good. Even a mediocre one will have THD+N that is well below any circuit you're trying to measure.
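If you'd rather script the test file than use WavePad, something like this will do (a sketch in Python with numpy/scipy; the filename, burst lengths and sample rate are just assumptions):

```python
# Sketch: generate a test WAV with bursts of 100 Hz, 1 kHz and 10 kHz tones
# at 0.9 x full scale, with silent gaps for noise-floor checks.
import numpy as np
from scipy.io import wavfile

FS = 176400          # sample rate (Hz), matching the interface used in this thread
BURST_S = 2.0        # tone burst length (s)
GAP_S = 1.0          # silence between bursts (s)
AMP = 0.9            # 0.9 x full scale, i.e. about -0.9 dBFS

def burst(freq):
    t = np.arange(int(BURST_S * FS)) / FS
    tone = AMP * np.sin(2 * np.pi * freq * t)
    # short fades avoid clicks at the burst edges
    fade = int(0.01 * FS)
    env = np.ones_like(tone)
    env[:fade] = np.linspace(0.0, 1.0, fade)
    env[-fade:] = np.linspace(1.0, 0.0, fade)
    return tone * env

gap = np.zeros(int(GAP_S * FS))
signal = np.concatenate([np.concatenate([burst(f), gap]) for f in (100, 1000, 10000)])

# write as a 32-bit float WAV (most DAWs will import this directly)
wavfile.write("testtones_176k4.wav", FS, signal.astype(np.float32))
```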

Do that and post some results, and maybe pics of the difference spectrum and noise floor and generally what you learned. If you can get that far, we (I) can tell you how to actually make a meaningful measurement of a piece of gear like your Pultec circuits. If you can't do this, then you can't really do a serious build, because you won't know if your project actually works or not, in which case .... oh well.
 
Thanks for all the replies, really appreciate the help!

I've done a loopback test of channel 1 (I have 8 in/out). The conditions were:

Recording
Source file: Signal generator in DAW, test tones (sine) @100Hz, 1kHz, 10kHz. Recorded at sample rate 176.4kHz via internal bus to file with level -0.9dBFS.

Recording chain: Connected line out ch1 (XLR) with line in ch1 (XLR)

Set input gain ch1 to record source @ -0.9dBFS (8dB input gain, line in)
Reduced output ch1 by 8dB
Recorded source.

Analysis
Playback of source to stereo internal bus L (reverse phase, unity gain channel fader. Output = -0.9 dBFS)
Playback of recording to stereo internal bus R (variable gain to achieve best null)
Internal bus L and R output to 1 mono channel
Output of mono channel used for analysis

I played around with the output gain of the recorded track to get the best null at 1kHz, with a reading of approx -40dBFS with source channel output at -0.9dBFS

Attached are 3 grabs of difference spectrum at 100Hz, 1kHz, 10kHz respectively.
 

Attachments

  • Loopback ch1 100Hz.PNG (202.5 KB)
  • Loopback ch1 1KHz.PNG (196.5 KB)
  • Loopback ch1 10KHz.PNG (192.1 KB)
Have run a test on the Pultecs.

Conditions:
Input signals - Nil; 100Hz, 1kHz and 10kHz @ +4dBu (1.227V measured at DAW output)
Pultec 1 and 2 set to bypass
DAW input - line in, 0dB gain
Measurement point - input channel of DAW, post-preamp.
 

Attachments

  • Pultec 1 no input 0dB gain line in.PNG (186.3 KB)
  • Pultec 2 no input 0dB gain line in.PNG (208.8 KB)
  • Pultec 1 bypass 100Hz @ +4dBu.PNG (210.3 KB)
  • Pultec 1 bypass 1kHz @ +4dBu.PNG (195.2 KB)
  • Pultec 1 bypass 10kHz @ +4dBu.PNG (200.5 KB)
  • Pultec 2 bypass 100Hz @ +4dBu.PNG (249.6 KB)
  • Pultec 2 bypass 1kHz @ +4dBu.PNG (194.8 KB)
  • Pultec 2 bypass 10kHz @ +4dBu.PNG (199.5 KB)
  • Pultec 1+2 bypass 100Hz @ +4dBu.PNG (229.4 KB)
  • Pultec 1+2 bypass 1kHz @ +4dBu.PNG (218 KB)
  • Pultec 1+2 bypass 10kHz @ +4dBu.PNG (225.3 KB)
Analysis
Playback of source to stereo internal bus L (reverse phase, unity gain channel fader. Output = -0.9 dBFS)
Playback of recording to stereo internal bus R (variable gain to achieve best null)
Internal bus L and R output to 1 mono channel
Output of mono channel used for analysis

I played around with the output gain of the recorded track to get the best null at 1kHz, with a reading of approx -40dBFS with source channel output at -0.9dBFS

Attached are 3 grabs of difference spectrum at 100Hz, 1kHz, 10kHz respectively.

You're not going to get a good null unless you align the samples. Meaning you have to actually zoom into the L (stimulus) and R (recording) track waveforms so that you can see individual samples and then timeshift the source track to align with the recorded track. But the noise floor is clearly more than good enough and I don't see any wild harmonics, so maybe we'll just assume that your audio interface is good and skip null testing your rig further.
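If you want to take the DAW out of the loop, the alignment and null can also be done offline, roughly like this (a sketch; filenames are hypothetical and it assumes mono files at the same sample rate):

```python
# Sketch: align the recording to the stimulus by cross-correlation,
# find the least-squares gain, and report the null depth.
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

fs_s, stim = wavfile.read("stimulus.wav")
fs_r, rec = wavfile.read("recording.wav")
assert fs_s == fs_r
stim = stim.astype(np.float64)
rec = rec.astype(np.float64)

# integer-sample lag that best lines up the two tracks
lag = int(np.argmax(correlate(rec, stim, mode="full"))) - (len(stim) - 1)
lag = max(lag, 0)                  # assume the recording starts no earlier than the stimulus
n = min(len(stim), len(rec) - lag)
s, r = stim[:n], rec[lag:lag + n]

g = np.dot(r, s) / np.dot(s, s)    # gain that best cancels the stimulus
residual = r / g - s

def rms_db(x):
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-20)

print("null depth: %.1f dB below the stimulus" % (rms_db(s) - rms_db(residual)))
```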

However, what is important is that you need to know exactly what sort of levels your interface is ultimately emitting. It's not at all clear to me what that is. When I said 0.9 dBFS, I just meant that when you generate the source wav file use 0.9 x 0 dBFS of the wav file. Meaning use 90% of the available headroom of the .wav. Then, when you played the .wav, you would adjust the output / input levels to something suitable. If you're just using a DAW to generate the tones, that's ok (although not ideal for repeatability and doing math on things like null testing), but you need to find a good level, and preferably you should try to figure out what 0 dBFS is. dBFS is a relative scale. If you put out a signal at 0.9 x 0 dBFS, the actual level going out of the interface depends on the DAC voltage range of your audio interface and whether or not the output is boosted after that, which many interfaces do to match the sort of levels pro gear uses. You need a way to measure dBu or dBV, or if you just have a DMM that measures AC RMS voltage you could play a 100 Hz tone (DMMs usually won't pick up higher frequencies) and get a level that way.

Or you can just use dBFS and do simple comparative measurements. Meaning when you do your loopback test you could play your test tone at -10 dBFS and adjust the input level to be exactly -10 dBFS. Now if you insert your Pultec or whatever circuit and run it again, you can compare and see the change in level, what the noise floor of the device is (taking into account the change in level), any 50 Hz, and so on.
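In code terms, the comparison boils down to something like this (a sketch only; the filename and band edges are placeholders, and it assumes a mono recording). Run the same code on the loopback recording and on the DUT recording and compare the numbers:

```python
# Sketch: comparative level measurement, everything relative to full scale.
import numpy as np
from scipy.io import wavfile

fs, x = wavfile.read("dut_1khz.wav")
if x.dtype.kind == "i":
    x = x / np.iinfo(x.dtype).max      # normalise integer samples to +/-1.0 full scale

win = np.hanning(len(x))
mag = np.abs(np.fft.rfft(x * win))
f = np.fft.rfftfreq(len(x), 1.0 / fs)

tone_band = (f > 950) & (f < 1050)                # bins around the 1 kHz tone
noise_band = (f > 20) & (f < 20000) & ~tone_band  # everything else in the audio band

tone_db = 20 * np.log10(np.sqrt(np.sum(mag[tone_band] ** 2)))
noise_db = 20 * np.log10(np.sqrt(np.sum(mag[noise_band] ** 2)))
print("tone is %.1f dB above the in-band noise" % (tone_db - noise_db))
```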
 
Have run a test on the Pultecs.

Conditions:
Input signals - Nil; 100Hz, 1kHz and 10kHz @ +4dBu (1.227V measured at DAW output)
Pultec 1 and 2 set to bypass
DAW input - line in, 0dB gain
Measurement point - input channel of DAW, post-preamp.

It is difficult to interpret these results because it's not clear that you're using the same conditions as your loopback. The point of the loopback is that it's the "control" in your experiment. So the most critical part (that I may not have emphasized properly) is that after you run the loopback test, you cannot change any levels of anything in your test rig. Whatever levels you chose in your DAW, and any level adjustment knob on your interface, must NOT be changed after you get the loopback recording. I put masking tape over the level control on my interface to make sure it's not changed by accident. So pick a level to test using the DUT, then, without touching any of the levels, do a loopback. Then switch the cables back and, again without touching any of the levels, run your DUT recording. Then post the side-by-side spectra of the loopback and DUT. That would be much better than trying to say the level was this, the gain on the input was that, and I measured so many volts in my DAW, which is almost meaningless, and most definitely meaningless unless you've carefully calibrated the DAW / audio interface.
 
Seems to me like most of the mains frequency harmonics are at about 72-74 dB below +4, resulting in about 65-68 dB S/N. This is not a very good performance. I maintain that you have a problem with the PSU.
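Back-of-envelope, that S/N figure comes from power-summing the harmonics; the count and levels below are illustrative assumptions, not values read off the plots:

```python
# Sketch: power-sum of a few mains harmonics relative to the +4 dBu signal.
import math

harmonics_db = [-72, -73, -74, -73]          # dB relative to the +4 dBu signal (assumed)
total_power = sum(10 ** (h / 10) for h in harmonics_db)
total_db = 10 * math.log10(total_power)
print("combined hum level: %.1f dB below signal" % -total_db)
# -> roughly 67 dB S/N: a handful of -73 dB-ish components
#    costs about 6 dB compared with a single one.
```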
 
Thanks for all the help so far, very much appreciated :)


You're not going to get a good null unless you align the samples. Meaning you have to actually zoom into the L (stimulus) and R (recording) track waveforms so that you can see individual samples and then timeshift the source track to align with the recorded track.

I'm sorry, I forgot to include this in my description. I did zoom in to sample level and align the audio before the null test.

However, what is important is that you need to know exactly what sort of levels your interface is ultimately emitting.

This is a professional setup, the DAW and interface are calibrated to AES3 standard, so 24dBu = 0dBFS. Measurement at the D/A outputs of a test signal of 1kHz sine @ 0dBFS gives a voltage of 12.22V, which is in line with AES/EBU standard.

This gives a nominal peak level of +12dBu (-12dBFS) and an alignment level of +4dBu (-20dBFS)

I'm pretty confident that the output levels I'm getting are in line with AES/EBU standard, and that I can trust the meters to accurately reflect the test signal levels.
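As a quick sanity check of that calibration (assuming the standard 0 dBu = 0.7746 Vrms reference), the numbers line up:

```python
# Sketch: expected RMS voltage if 0 dBFS = +24 dBu and 0 dBu = 0.7746 Vrms.
import math

def dbu_to_vrms(dbu):
    return 0.7746 * 10 ** (dbu / 20)

print(round(dbu_to_vrms(24), 2))   # ~12.28 V, close to the 12.22 V measured at the D/A
print(round(dbu_to_vrms(4), 3))    # ~1.228 V, matching the +4 dBu alignment level
```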

Seems to me like most of the mains frequency harmonics are at about 72-74 dB below +4, resulting in about 65-68 dB S/N. This is not a very good performance. I maintain that you have a problem with the PSU.

That's interesting. These Pultecs are Version 2, and there are 2 large caps in the PSU (C1 and C2). The values for these are 10uf/450V, and instead of using axial electrolytics as per the silkscreen, I've used Epcos metallised polypropylene motor run caps - Epcos B32332I6106J80

When I said 0.9 dBFS, I just meant that when you generate the source wav file use 0.9 x 0 dBFS of the wav file. Meaning use 90% of the available headroom of the .wav.

Ah, thanks for clarifying that.
 
I'm sorry, I forgot to include this in my description. I did zoom in to sample level and align the audio before the null test.

Ok. Well that's a little strange because I would think you would get a much better null than what your plots showed. The 10 kHz one showed attenuation of -4 dB? That's weird. Whenever I find something weird like that, 9 times out of 10 it's worth chasing.

This is a professional setup, the DAW and interface are calibrated to AES3 standard, so 24dBu = 0dBFS. Measurement at the D/A outputs of a test signal of 1kHz sine @ 0dBFS gives a voltage of 12.22V, which is in line with AES/EBU standard.

This gives a nominal peak level of +12dBu (-12dBFS) and an alignment level of +4dBu (-20dBFS)

Ok but you really should run a loopback just to sanity check your rig at least once in a while. Your previous plots were null tests so maybe post one without. If you put out +4 dBu then I would expect to see a peak at -20 dBFS exactly. I don't care how professional the rig is, there are lots of places in that tool chain for things to go astray.

That's interesting. These Pultecs are Version 2, and there are 2 large caps in the PSU (C1 and C2). The values for these are 10uf/450V, and instead of using axial electrolytics as per the silkscreen, I've used Epcos metallised polypropylene motor run caps - Epcos B32332I6106J80

Looks like the 50Hz is at -88 dBu. That's not obviously bad AFAIK. For anything I do, I would like to see a pretty flat noise floor. But I don't do tubes or linear supplies these days; if I did, a little mains that far down doesn't seem so bad to me.
 
Ok. Well that's a little strange because I would think you would get a much better null than what your plots showed. The 10 kHz one showed attenuation of -4 dB? That's weird. Whenever I find something weird like that, 9 times out of 10 it's worth chasing.

Ok but you really should run a loopback just to sanity check your rig at least once in a while. Your previous plots were null tests so maybe post one without. If you put out +4 dBu then I would expect to see a peak at -20 dBFS exactly. I don't care how professional the rig is, there are lots of places in that tool chain for things to go astray.

Thanks Bo, you've helped a lot here.

I've dug a bit deeper, and the latest firmware update for the interface has made a change to the gain settings of the AD module, which wasn't mentioned in the release notes but is in the latest version of the user manual:

The AD converters have been designed in order to be able to benefit from the +3.1 dB SA-CD headroom offered by DSD, as per the scarlet book standard. Therefore a minimum of +6 dB gain is required on the mic preamp or line input. This gain is applied in the digital section post AD just prior to the sigma delta 1-bit modulator. It is automatically applied as soon as the ADs are switched to DSD (64, 128, 256FS) and is visible in the preamp page. The gain can be adjusted between +6dB and +66dB in DSD mode. In Line mode, with a gain of +6dB, an input of +21dBu will generate a signal of +3dB SA-CD, in Mic mode, with the same gain, an input of +10dBu would generate a signal of +3dB SA-CD. To benefit also from this +6dB headroom in DSD on the DA outputs, we recommend our users to set the Output Attenuation to -6dB on each DA module configuration page.

The latest firmware has removed this default setting of +6dB minimum gain.

Although related specifically to DSD format, perhaps this applies more generally to the AD calibration whether working with PCM or DSD. This might explain the lower DAW input level (post DUT). Next chance I get I'll run the null test with the (old) default +6dB gain on DAW input and see how the levels match up.

Stephen
 
Although related specifically to DSD format, perhaps this applies more generally to the AD calibration whether working with PCM or DSD. This might explain the lower DAW input level (post DUT). Next chance I get I'll run the null test with the (old) default +6dB gain on DAW input and see how the levels match up.

A little difference in level isn't going to change the null test that much. It's probably something else, like you're just not lining up the samples the way you think. This is why I like to use a wav file with bursts of tones, so you can line up an edge. Also, a lot of software is pretty wonky. I would not be surprised in the slightest if you found out that your DAW function generator didn't put out a steady tone. I would also not be surprised if your spectrum software is either goofy, or some kind of averaging is turned on, or the windowing is not right (although again that alone will not explain the bad null). There are so many things that can go wrong doing stuff like this, with noise bleeding in from the oscilloscope ground, how cables are draped around, levels, and goofy software, that you really have to go slow at least once and verify that every piece is doing what you think it should do. Believing that your test rig is correct when it's not is literally worse than not testing at all. That's why I like to use a wav file that is known to be perfect because it was explicitly constructed, and then I use GNU Octave to process the data separately. That just eliminates a lot of stuff. It also gives me high res plots like this:

1623551414984.png
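Roughly the idea, sketched here in Python rather than Octave (not the actual script behind that plot; the filename is hypothetical and a mono recording is assumed). The point is that the window and averaging are explicit instead of hidden in the analyser's settings:

```python
# Sketch: averaged, windowed spectrum of a recording using Welch's method,
# so the windowing/averaging choices are explicit and repeatable.
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch
import matplotlib.pyplot as plt

fs, x = wavfile.read("loopback_1khz.wav")      # hypothetical mono recording
if x.dtype.kind == "i":
    x = x / np.iinfo(x.dtype).max              # normalise to +/-1.0 full scale

f, pxx = welch(x, fs=fs, window="hann", nperseg=65536, noverlap=32768)
plt.semilogx(f, 10 * np.log10(pxx + 1e-30))
plt.xlabel("Frequency (Hz)")
plt.ylabel("PSD (dB re full scale, per Hz)")
plt.grid(True, which="both")
plt.show()
```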
 
That all makes a lot of sense. I'm sure that some of the settings in the analysis software need tweaking, I'll attach some grabs of the settings.

Is that plot generated by GNU octave?
 

Attachments

  • Spectrometer settings.PNG (95.8 KB)
  • Flux Global settings.PNG (35.3 KB)
Is it not sufficient just to FFT the output signal?

Hi Ian,

A very good question!

I have little experience in testing. I agree with Bo that I need to have a clear understanding of my test rig (in this case the DAW or Flux Analyser and the AD/DA converters). I think I'm almost at that point now with the DAW. I think I need to do the same using the Flux Analyser instead of the DAW, and then I'm done with this initial setup.

In order to do FFT I need to use the signal generator in Flux, which produces pink and white noise, and swept sines.

Maybe now is the time to ask "how do I FFT the output signal?"

Stephen
 
These Pultecs are Version 2, and there are 2 large caps in the PSU (C1 and C2). The values for these are 10uf/450V, and instead of using axial electrolytics as per the silkscreen, I've used Epcos metallised polypropylene motor run caps - Epcos B32332I6106J80
I don't think it makes any significant difference. What does make a difference is how they are connected to "ground". Ground circulation currents must be thoroughly separated from sensitive nodes.
Analyze your layout. Consider every connection as a resistor.
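For example (numbers purely illustrative, just to show the kind of back-of-envelope check this suggests):

```python
# Sketch: why shared ground returns matter. All numbers are assumptions.
# PSU ripple current returning through a wire that a sensitive stage also
# uses as its reference drops a hum voltage across that wire.
import math

ripple_current_a = 0.05         # 50 mA of ripple return current (assumed)
shared_resistance_ohm = 0.01    # 10 milliohms of shared wire/trace (assumed)
hum_vrms = ripple_current_a * shared_resistance_ohm   # 0.5 mV across the shared run
signal_vrms = 1.228                                   # +4 dBu operating level
print("hum sits %.0f dB below +4 dBu" % (20 * math.log10(signal_vrms / hum_vrms)))  # ~68 dB
```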
 