RTA applications like TrueRTA also give a single SPL-like overall noise figure in dBu for the displayed frequency range.
The overall SPL figure given in an RTA is usually a few dB higher (noisier) than the peak of a single dominant frequency, because there is always noise in the rest of the bandwidth, and it aggregates to add a couple of dB.
For example, mains noise: I will see peaks at the fundamental and 2nd harmonic of the mains frequency.
Each may be around -90dBu on a quiet build.
The rest of the spectrum normally shows a slowly rising high-frequency noise floor.
The overall SPL noise floor would be shown as around -84dBu for 20Hz-20KHz - basically a few dB above the dominant peaks.
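As a sanity check on that arithmetic: the overall figure is just a power sum of everything in the band. A minimal sketch, where the -87 dBu broadband contribution is an assumed illustrative value, not a measurement:

```python
import math

def dbu_power_sum(levels_dbu):
    """Power-sum several dBu levels into one overall dBu figure."""
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels_dbu))

# Two mains-harmonic peaks at -90 dBu each, plus an assumed -87 dBu
# for the rest of the 20Hz-20kHz broadband noise (illustrative value).
overall = dbu_power_sum([-90.0, -90.0, -87.0])
print(round(overall, 1))  # -84.0
```

So a handful of components all below -90 dBu quite naturally integrates to an overall figure around -84 dBu.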
I need to check if restricting the display bandwidth alters that SPL noise floor figure.
I have a feeling it doesn't - that it calculates over the audio interface's full sampling bandwidth, not just the displayed range.
Could be wrong - I'll check that.
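One way to check is synthetically: generate white noise, integrate the one-sided spectrum over the full Nyquist bandwidth and over just 20Hz-20kHz, and compare. This sketch assumes a 96 kHz interface rate and is not a claim about what TrueRTA actually does internally:

```python
import numpy as np

fs = 96_000                                      # assumed interface sample rate
n = 1 << 16
x = np.random.default_rng(0).standard_normal(n)  # synthetic white noise capture

freqs = np.fft.rfftfreq(n, 1 / fs)
power = np.abs(np.fft.rfft(x)) ** 2 / n**2       # per-bin power

def band_power_db(f_lo, f_hi):
    # Integrate (power-sum) the bins falling inside the band, in dB.
    sel = (freqs >= f_lo) & (freqs <= f_hi)
    return 10 * np.log10(power[sel].sum())

full = band_power_db(0, fs / 2)       # everything up to Nyquist
audio = band_power_db(20, 20_000)     # restricted display bandwidth
print(round(full - audio, 1))         # roughly 10*log10(48000/19980) ~ 3.8 dB
```

For white noise the gap is set by the bandwidth ratio, so if the reported figure doesn't move when the display band shrinks, the calculation is running over the full Nyquist bandwidth.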
When I'm measuring, I note the 'overall SPL' figure in dBu at some standard condition of gain in the device under test.
Unity gain throughout is one condition, and then I usually check at some 'sensible' gain structure suitable for the device.
Then I note readings for the signal-applied condition and the signal-shorted condition and record the relevant info.
That gives me a pretty reasonable measure of things like PSU noise levels, overall noise levels and harmonic distortion levels, as well as some characterisation of spectral 'futz' distortions.
I quote the SPL figure and the bandwidth because it is kind of a 'worst case' noise floor and usually I say something like 50/100Hz components all below -90dBu.
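The bookkeeping behind those two conditions is just differences on the dBu scale. A minimal sketch with hypothetical readings (none of these numbers are measurements from above):

```python
# Hypothetical unity-gain readings in dBu (illustrative, not measured).
signal_applied = 4.0     # reference tone at the output
input_shorted = -90.0    # overall noise floor, input shorted

# Difference on the dB scale is the signal-to-noise ratio
# over the measurement bandwidth.
snr_db = signal_applied - input_shorted
print(snr_db)  # 94.0
```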
The accuracy of the measurements, of course, is related to the resolution and performance of your tools.
In addition, it is limited to the 'verifiable calibration state' of your measurement chain.
ie. it's no good believing and claiming figures which can't reasonably be justified by independent means.
Commercial outfits which make claims on accuracy in critical fields spend a LOT of time continuously verifying measurement means.
For a 'homey' like me, I can measure an AC voltage confidently to a couple of mV on a CRO. So I set up some tests to ensure that what I think I am seeing on the RTA can be related back to the CRO measurements.
Accuracy to within a few dB, on a dBu scale ranging from -120dBu to +20dBu, is I think reasonable to expect.
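Relating CRO voltages to the dBu scale is one conversion away, since 0 dBu is about 0.775 V RMS; a minimal sketch:

```python
import math

DBU_REF_V = 0.774597  # 0 dBu = sqrt(0.6) V RMS (1 mW into 600 ohms)

def volts_to_dbu(v_rms):
    return 20 * math.log10(v_rms / DBU_REF_V)

def dbu_to_volts(dbu):
    return DBU_REF_V * 10 ** (dbu / 20)

print(round(volts_to_dbu(0.002), 1))       # a couple of mV -> about -51.8 dBu
print(round(dbu_to_volts(-120) * 1e6, 2))  # -120 dBu -> ~0.77 uV RMS
```

Which is why the bottom of that range can only be verified indirectly: a couple of mV of CRO resolution runs out around -50 dBu, and -120 dBu is well under a microvolt.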
All quite serviceable, but to get better I would need an instrumentation-amp rig, some weighting filters and probably some notch filters.
And certainly a much more accurate multimeter, and probably a better CRO too.
The interesting thing with RTAs is they all show somewhat different interpretations of the data.
So much depends on the frequency-domain conversion method - the 'windowing' and 'bucket' (bin) length and all that.
The couple I use can be reasonably related back to fundamentals by a reasonably competent person.
And that gives confidence in the reported results.
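The window effect is easy to demonstrate: the same bin-centred tone reads differently under a rectangular and a Hann window unless the application corrects for the window's coherent gain. A synthetic sketch, not a claim about any particular RTA's internals:

```python
import numpy as np

n = 1 << 14
k = 1000                                  # tone exactly on bin k: no leakage
x = np.sin(2 * np.pi * k * np.arange(n) / n)

def peak_bin_db(window):
    # Uncorrected amplitude of the strongest bin, in dB.
    spectrum = np.abs(np.fft.rfft(x * window)) / n
    return 20 * np.log10(spectrum.max())

rect = peak_bin_db(np.ones(n))            # rectangular window
hann = peak_bin_db(np.hanning(n))         # Hann window, coherent gain ~0.5
print(round(rect - hann, 1))              # ~6.0 dB apart for the same tone
```

Uncorrected, two RTAs can honestly report the same tone about 6 dB apart, and noise floors shift too, by the window's equivalent noise bandwidth instead - hence the "somewhat different interpretations".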
Rightmark, on the other hand - and this is my personal opinion, to which everyone is entitled - is more of a random number generator to me.
I've never been able to establish any relationship between its reported results and my test equipment.
That could be just me, however - it reminds me a lot of the statistical modelling applications I used to use when calculating investment portfolio risk measures.
Not worth ****