Preamp difference : if it's not the frequency, not the slew rate, and not the harmonics, what is it ?

GroupDIY Audio Forum

This might be an embarrassing personal question, but how hot do you folks record things, in real life? What I've always assumed, and everything I can quickly find (here, here, or here) suggests -18 or -20dBFS, with peaks no louder than -10dBFS.

So we'd have at least 10dB - a factor of 3, voltage-wise - before clipping behaviour or slew-induced distortion should come into play. Sure, it's a concern in a well-engineered product, but is it in real life?

Or is all that headroom nonsense like trying to do heavy metal in Dobly?
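The dB-to-voltage arithmetic above is easy to verify - a minimal sketch, assuming nothing beyond the levels quoted in the post:

```python
def db_to_voltage_ratio(db: float) -> float:
    """Convert a level difference in dB to a linear voltage ratio: 10^(dB/20)."""
    return 10 ** (db / 20)

# Peaks at -10 dBFS leave 10 dB of headroom before full scale:
print(round(db_to_voltage_ratio(10), 2))   # 3.16x in voltage
# and a -20 dBFS nominal level sits a full factor of ten below clip:
print(round(db_to_voltage_ratio(20), 1))   # 10.0x
```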
 
Dobly? (sorry).
=====
Using -20dBFS as nominal mix 0dB is good practice, and roughly equivalent to 0VU on old-school analog consoles.

All bets are off if audio paths are allowed to saturate (analog or digital). Most modern A/D converters that use input oversampling should be all but impossible to slew-limit.

JR
 
There’s an interview with Rupert Neve worth reading - Geoff Emerick from Abbey Road would disagree, as would Rupert. Neve-designed consoles were built with 75kHz-and-above bandwidth for a very good reason: distortion in the octave above 20kHz (up to 40kHz), and even the next octave up, can have detrimental effects, via its harmonics, down in the perceivable audio spectrum. There are 3 parts to the interview - access parts 2 & 3 using the right-arrow tabs on the interview page. Part 1 deals with what I mention above.
Interview with Rupert Neve:
https://www.audiotechnology.com/features/interview/rupert-neve-interview-part-1
Oh my. An interview from 1998, and people talking about skin effect in the context of audio ... that really brings back memories ... mainly of vintage snake oil ...

I lifted this from Wikipedia because I'm too lazy to put effort into debunking weird beliefs (and RN had a few of those). But even if we were to accept that recording ultrasonics somehow improves the fidelity of AF replay, how is skin effect relevant, even to ultrasonics?

Frequency   R (Ω/km)   L (mH/km)
1 Hz        172.24     0.6129
1 kHz       172.28     0.6125
10 kHz      172.7      0.6099
100 kHz     191.63     0.5807
1 MHz       463.59     0.5062
2 MHz       643.14     0.4862
5 MHz       999.41     0.4675

Did you ever measure intermodulation distortion??
yes, of course! 😎
 
Dobly is when you turn everything up to 11 ;)
 
"You don't do heavy metal in Dobly". 🤔

[TMI] When I was advising Analog Engineering designers about the Peavey VMP tube mic preamp feature set, I suggested making the gain knob go all the way up to 11, as a nod to Spinal Tap (and they did). [/TMI]

JR
 
Isn't "attack" just another term for slew rate, which is effectively limited by humans' ~20kHz upper hearing range? I understand the "I just hit a snare drum" aspect, but am I wrong to say that even if the rise time did exceed the 20kHz limit, we humans wouldn't be able to perceive it?
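As a sanity check on that intuition: the peak slew rate of a sine A·sin(2πft) is 2πfA, so a band-limited signal can only demand so much. A minimal sketch (the 10 V peak is just an assumed hot line level, not anything from the thread):

```python
import math

def peak_slew_rate_v_per_us(freq_hz: float, peak_v: float) -> float:
    """Max slew rate of A*sin(2*pi*f*t), i.e. 2*pi*f*A, converted to V/us."""
    return 2 * math.pi * freq_hz * peak_v / 1e6

# A full-scale 20 kHz sine at 10 V peak demands only about:
print(round(peak_slew_rate_v_per_us(20e3, 10.0), 2))  # 1.26 V/us
```

Even 1970s opamps rated at a few V/µs clear that with margin, which is why slew limiting at sane levels is hard to provoke.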

Oh my. An interview from 1998 and people talking about skin effect in the context of audio ... that really brings back memories ... mainly of vintage snake oil ...
That interview was done before Google and Wikipedia btw.
RN may have had some weird ideas, but he certainly made an impact, designing some of the best consoles and preamps the world has seen - not only Neve but Amek as well, and Focusrite, which he founded - achievements none of us here would come remotely close to. The 2254 compressor is another example.
Skin effect, IIRC, only really affects conductors over 18mm in diameter - it has little effect at audio frequencies in small conductors, and mainly becomes relevant in power transmission lines.
I think the relevance of ultrasonic capability in recording consoles and recording devices is maybe more about keeping unwanted artifacts/IM distortion in the higher spectrum from finding their way back down into the audio spectrum and to our ears.
Personally I think ultrasonic perception is best left to bats and hearing aids.

Abstract from another research paper, Martin L. Lenhardt et al., Biomedical Engineering Program, Medical College of Virginia:
“Bone-conducted ultrasonic hearing has been found capable of supporting frequency discrimination and speech detection in normal, older hearing-impaired, and profoundly deaf human subjects. When speech signals were modulated into the ultrasonic range, listening to words resulted in the clear perception of the speech stimuli and not a sense of high-frequency vibration. These data suggest that ultrasonic bone conduction hearing has potential as an alternative communication channel in the rehabilitation of hearing disorders.”
 
In a studio system we often "cascade" multiple circuits/gizmos in a series path. As each device adds its own HF roll-off, we can end up with a rather poor overall system frequency response.

The same thing happens at the LF end of the spectrum as well.

I guess it depends on what any person considers "high fidelity". Flat from 20 Hz to 20 kHz? OK... is 0.01 dB down at the extremes acceptable? 0.1 dB down? 1 dB down? 3 dB down... etc. etc.

Bri
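Bri's point is easy to quantify: each first-order roll-off contributes its own attenuation, and the dB losses of cascaded stages simply add. A sketch (the six-device chain and 100 kHz corner are arbitrary assumptions for illustration):

```python
import math

def pole_loss_db(f: float, fc: float) -> float:
    """Attenuation in dB of one first-order low-pass pole with corner frequency fc."""
    return -10 * math.log10(1 + (f / fc) ** 2)

n_stages, fc = 6, 100e3   # six devices in series, each -3 dB at 100 kHz
for f in (10e3, 20e3):
    print(int(f), "Hz:", round(n_stages * pole_loss_db(f, fc), 2), "dB")
```

Six devices that each look "flat to 100 kHz" on their own still lose about 1 dB at 20 kHz as a chain, which is exactly the creeping system-level roll-off described above.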
 
Just a fun experiment. I have used this phase-invert null method to line up compressors and EQs to be used on stereo mixes, channels on recording consoles, etc.
At one stage I wanted to compare different DAWs' audio engines. I compared Cubase, ProTools, Digital Performer and LogicPro, using a commercial track imported straight from CD into each DAW and bounced out using the standard export function, then imported all the bounces into the same DAW for the comparison. Cubase's and ProTools' import/exports gave an absolute null (on phase inversion of one track), so they were identical - Logic and Digital Performer were way different, both from each other and from Cubase and ProTools.
I had the same experience regarding DAWs. A client wanted to print mixes in a peculiar way and I wanted to demonstrate for him which DAW and routings yielded more “correct” exports/bounces.

I’ve always found a few DAWs to induce random playback “irregularities.” I trust Avid, Steinberg, and Reaper. The rest are fun tools, but questionable audio processors.
 
Geoff and Rupert both agree, as do I - we three have had this discussion together many times. Things that affect the tone:

The mic's relationship with the input stage, mostly the transformer input, which is why the tone changes when you put in a pad.

The first stage of any opamp is where all the gain is. When I did the Tonelux opamp, I designed it with low open-loop gain - it maxed out at 75dB. This is important because when you get close to unity gain, like with a kick or snare, the opamp is actually running at full gain, and all of that gain is being fed back into the inverting input to make it unity. Not the most preferred method. All of these older amps didn't have the gain that amps do now. In many cases a 741 will sound better than a lot of newer amps.
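A quick numeric sketch of that point, using the standard feedback relation A/(1+Aβ) (the 120 dB figure is just an assumed typical modern-opamp open-loop gain, for contrast with the Tonelux 75 dB):

```python
def closed_loop_gain(a_ol_db: float, beta: float) -> float:
    """Classic feedback relation A / (1 + A*beta), with A given in dB."""
    a = 10 ** (a_ol_db / 20)
    return a / (1 + a * beta)

# At unity closed-loop gain (beta = 1), the entire open-loop gain becomes
# excess loop gain working against the first stage:
for a_ol_db in (75, 120):
    print(a_ol_db, "dB open loop ->", round(closed_loop_gain(a_ol_db, 1.0), 6))
```

Both amps land within a whisker of unity, but the 120 dB part has 45 dB more feedback wrapped around its input stage at that setting, which is the design difference being described.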

As far as the extended bandwidth: I remember an evening with Rupert at the Salt Lick in TX, where we sat outside and talked in depth about bandwidth. The upshot of the conversation was that if the roll-off is well out of the listening band, the higher frequencies stay more in phase with each other as frequency goes up, which is where a lot of the snap comes from. This is also why lower sample rates don't always sound as good as higher ones. We obviously don't hear the out-of-band frequencies, but they remain just as in phase with the ones we can hear.

One additional thing: a mishmash of preamps and EQs can also split up a good mix, because you lose some of the coherency that was there when all the EQs and pres were the same. The matched phase shifts of identical units do tend to glue things together better than two different sets of gear…
 
One simple, fun and not-so-scientific test would be to find a piece of music with plenty of hard-hitting drums and guitars - plenty of “punch” - and import it into your DAW. Route it out of the DAW and straight back in via just the audio interface I/O, and record that. Then, using the same interface output, pass the original (not the re-recorded) music through the preamps under test and back into the DAW via exactly the same input on the interface. If the preamps are mono, as I assume, you’ll need to start with a mono export of the track and route a mono signal through the preamps.

Since the same signal has been used with the same interface I/O, the newly recorded pieces should all line up fairly closely in time (allowing for slight delay differences with each pre) with each other and with the direct DAW re-record - you may need to slide them into position against that first re-recorded track by observing the expanded waveforms.

Then you can phase-invert each preamp’s recording in turn, sum it with the non-inverted original re-record, and listen to the difference between what passed through a preamp plus the interface I/O and what passed through the interface I/O alone. Whatever you hear, if there is no absolute null, is the characteristic difference of the preamp compared to no preamp - the “sound” of the pre. You will need to adjust the level balance of each preamp’s recording to come as close to null as possible; what you hear will be the frequencies that deviate from the original signal.

Then you can play the recordings of two preamps at a time with one phase-inverted, and you will hear the difference between the preamps.

This is only an audible test, with no other result than hearing how different each preamp is from the others - and from the original I/O path, which is part of each preamp’s recording and is thus eliminated from the equation, since all recordings share identical I/O sonic characteristics. You can also do this with a sweep, an impulse or a white-noise signal. You can also record the summed differential of a pair of tracks and look at the expanded waveform, or put each preamp’s recording, plus the differential recordings, through analyzer software and see where the differences lie.

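For anyone who'd rather script the subtraction step than judge by ear, here is a minimal sketch of the gain-match-and-null idea (assumes NumPy, mono float arrays already time-aligned as described above; the gain match is a simple least-squares fit):

```python
import numpy as np

def null_residual_db(ref: np.ndarray, dut: np.ndarray) -> float:
    """Gain-match and subtract two captures; return residual RMS in dB.
    ref = the straight interface loopback recording,
    dut = the same material recorded through the preamp under test."""
    n = min(len(ref), len(dut))
    ref, dut = ref[:n], dut[:n]
    # least-squares gain match, so a simple level offset
    # doesn't masquerade as the "sound" of the preamp
    g = np.dot(ref, dut) / np.dot(dut, dut)
    residual = ref - g * dut
    rms = np.sqrt(np.mean(residual ** 2))
    return float(20 * np.log10(max(rms, 1e-12)))

# identical signals null essentially completely once gain-matched:
fs = 48000
x = np.sin(2 * np.pi * 1000 * np.arange(fs) / fs)
print(null_residual_db(x, 0.5 * x))  # deep null (floored at -240 dB here)
```

Anything the function reports well above the converter noise floor is, as the thread says, the characteristic difference of the DUT.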
Very true, any residual is distortion - but distortion of what? The A-D needs to have far lower distortion than you expect from the DUT, or the A-D effectively becomes the DUT.
 
1. With modern electronics, a pseudo-random noise test signal could be produced and then subtracted from the amplified signal; the remainder would be the added noise.
2. Use DSP-generated noise with notches at various frequencies, then measure the added noise in the notches.
3. Use a baseband CDMA/LTE-type signal and measure the added sideband noise.

Of course subtracted effects may not show up :)
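Method 2 above is easy to prototype: carve notches into a noise spectrum, run it through the DUT, and see what energy reappears inside the notches. A toy sketch (assumes NumPy; a tanh soft-clipper stands in for a real preamp, and the notch frequencies/widths are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, fs = 65536, 48000
freqs = np.fft.rfftfreq(n, 1 / fs)

# build broadband noise with spectral notches at a few probe frequencies
spec = rng.standard_normal(len(freqs)) + 1j * rng.standard_normal(len(freqs))
notches = [1e3, 5e3, 10e3]                 # Hz, arbitrary probe points
for f0 in notches:
    spec[np.abs(freqs - f0) < 100] = 0      # carve a ~200 Hz-wide notch
x = np.fft.irfft(spec, n)
x /= np.max(np.abs(x))

y = np.tanh(3 * x) / 3                      # toy nonlinear DUT

# distortion + noise shows up as energy reappearing inside the notches
X, Y = np.abs(np.fft.rfft(x)), np.abs(np.fft.rfft(y))
for f0 in notches:
    band = np.abs(freqs - f0) < 100
    print(int(f0), "Hz notch:", round(20 * np.log10(Y[band].mean() / Y.mean()), 1), "dB re: mean")
```

The input has essentially zero energy in the notches; whatever the output shows there is the DUT's added noise and intermodulation, measured without any subtraction step.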
 
In the comparison test each device shares the same A-D, D-A including the comparison benchmark re-record done without any preamp involved, which basically eliminates the interface from the test as the path is identical for each - any differences perceived are only from the DUT.
What I found in the DAW comparison - say, LogicPro vs Cubase - was that the perceived difference sounded mainly like reverb content. Were the pan laws different in Logic from record to playback? You would certainly expect any professional DAW to give back exactly what you put in if everything is set to unity gain. I repeated the same test with other software after seeing/hearing the first results, and one thing was apparent: the errors were mainly in the mid-high regions. LF differences were smaller but still apparent, and no two were the same. These differences are not necessarily distortion but possibly errors in reproduction, whether in a DAW's audio engine or a preamp.
 
Also, I repeated the DAW tests - originally done via D-A/A-D - by sending the audio out and back in via optical, with the same results. This on top of the original bounces straight to disk.
 
Since the 70s, inexpensive off-the-shelf op amps have been faster than audio signals.

I wouldn't call it synergy. The more appropriate term is "loading", and yes, different termination impedances can cause audible differences.

JR
As JR said... I've found that "popular", well-regarded, voodoo transformers can be made mediocre by poor loading... Then there are the overlooked, unpopular units that can really sound good with proper (not necessarily per-spec) loading and Zobels.
 
I have shared this anecdote before, but back in the late 70s early 80s I was spending some time inside a friend's recording studio control room, tweaking a delay line/flanger design (Loft 440/450). Efx designs are pretty subjective but I was spending time in a real recording studio to get access to good source material, with accurate studio monitors for playback.

After a few hours the group was getting burned out and needed a cigarette break. Since I don't smoke I grabbed a truly mediocre cardboard cup of coffee from the machine in the break area.

Returning to the circuit tweak after a 15-20 minute break, what I was hearing was dramatically different from what I thought I had heard before. I am not sure what to blame other than human fallibility.🤔 While that studio delay-line/flanger product was successful, it pretty much got spanked in the market by lower-cost digital delay lines.

Since then I prefer to work with audio paths that can be objectively characterized with a competent test bench. I learned decades ago that my ears were not absolutely reliable. I mainly use my ears to confirm that I didn't neglect to measure something important. :cool:

JR
I worked for a speaker company. There was a new design in the works.. Everyone was gathered around listening to how much "space" they presented. I walked in and listened for ~3 seconds and pronounced they were out-of-phase.. Back to square one...
 
Actually, L-R is used in surround-sound decoders to suppress direct sound and enhance the difference information that contains ambience and the like.
===
I was always apprehensive about uncontrolled listening tests that weren't properly set up and managed.

JR
 
That interview was done before Google and Wikipedia btw. RN may have had some weird ideas but certainly made an impact ... Skin effect IIRC only really affects conductors over 18mm dia ...
Neve gear was OK - I never said it wasn't. But I don't think many of their approaches were unique - RN's reference to "power amplifiers" in line output stages was certainly not unique or limited to his gear.

Skin effect was en vogue in the 1980s and 90s ... because, despite having been first postulated nearly a century earlier, it had gained awareness as a new "thing" which no one could argue with. I did specifically emphasise that I'd lifted the table from Wiki "because I'm lazy". I am. And my knowledge predates Google's establishment by some decades. Factually, skin effect has even less to do with 50 or 60Hz power transmission than it does with anything AF or higher.
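Putting numbers on the whole skin-effect digression: skin depth is δ = √(ρ/(π·f·μ)), and at audio frequencies in small conductors it is larger than the wire itself, so resistance barely changes - consistent with the Wikipedia table quoted earlier. A quick sketch (1.68e-8 Ω·m is the standard resistivity of copper):

```python
import math

def skin_depth_mm(freq_hz: float, rho: float = 1.68e-8, mu_r: float = 1.0) -> float:
    """Skin depth sqrt(rho / (pi * f * mu0 * mu_r)) in mm; defaults model copper."""
    mu0 = 4 * math.pi * 1e-7   # permeability of free space, H/m
    return 1e3 * math.sqrt(rho / (math.pi * freq_hz * mu0 * mu_r))

for f in (50, 1e3, 20e3, 100e3):
    print(int(f), "Hz:", round(skin_depth_mm(f), 2), "mm")
# 50 Hz -> ~9.2 mm (hence the "~18 mm diameter" rule of thumb for power
# conductors); 20 kHz -> ~0.46 mm, already comparable to the radius of
# typical small hookup wire.
```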
 