Preamp difference: if it's not the frequency, not the slew rate, and not the harmonics, what is it?

..recovering from overload in a graceful way is probably an underestimated parameter when it comes to preamps. Somewhat in line with the soft-clipping suggested by JR earlier

(great thread, btw..!)
In my experience, recovery from overload is more of a concern in audio power amplifier design than in preamps, due to driving high currents into low-impedance loads. Current limiting can add another deviation from ideal behavior that requires benign recovery. Non-symmetrical current limiting can also generate a DC output component that can damage loudspeakers. Recovery from clipping (saturation) and/or slew-rate limiting should be fast, with minimal artifacts created.

Some power amps routinely include fast clip limiters. Few preamps do; I already noted one exception (a mic preamp with a soft limiter).

JR
 
NFB helps reduce noise, distortion and output impedance under normal circumstances,
but when an amp is driven into clipping, the ability of the feedback to correct breaks down. This prolongs the recovery time of the amp after a transient event, making any distortion more audible and objectionable.

Transient events happen in the real world. The mic amp being first in the program chain, it does matter how it sounds and recovers from short-term overloads.
 
I saw multiple claims that it's "a set of factors" that influence the sound, but do we have actual measurements of that? Especially when it comes to transient response, nothing is clear. Maybe I did not find the right studies, and of course it makes sense that many parameters can influence the result - but I'd like to see some clear comparative study of, say, a tube preamp, a clean solid-state one and a colored solid-state one with transformers, that would highlight what is actually different. It doesn't have to be exhaustive, but I wish it could show more than simply "a different frequency response".


I did a null test, for instance, between the two op-amps/transformers on my Warm Audio TB12. There is a difference in the mids/high-mids in particular, but it doesn't sound particularly different to me. So I'm not sure what to conclude: maybe phase is affected, but it doesn't change the sound? It's a rabbit hole, lol.
Some parameters are easily measured; others are only detectable with the ear.

You can measure transient response and overload recovery with square waves, gated sines or other waveforms and a wideband DSO.
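For a repeatable stimulus, a gated sine burst is easy to generate in software. A minimal sketch (Python with numpy/scipy; the rate, frequency and burst timing are arbitrary choices, and the filename is hypothetical): drive the DUT a few dB into clipping during the burst and capture the output.

```python
# Gated-sine burst generator for transient/overload-recovery tests.
# Drive the device under test a few dB into clipping during the burst,
# then watch how quickly the output settles on a scope or in a capture.
import numpy as np
from scipy.io import wavfile

FS = 192_000               # high rate so recovery artifacts aren't filtered away
F0 = 1_000                 # burst tone frequency, Hz
BURST_MS, GAP_MS = 20, 80  # 20 ms on, 80 ms off

t = np.arange(int(FS * BURST_MS / 1000)) / FS
burst = np.sin(2 * np.pi * F0 * t)           # whole cycles: gates at zero crossings
gap = np.zeros(int(FS * GAP_MS / 1000))
signal = np.tile(np.concatenate([burst, gap]), 10)

wavfile.write("gated_sine_1k.wav", FS, (0.9 * signal).astype(np.float32))
```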

What maker's preamps do you want to compare? There are only a few thousand ever made, each with their own set of parameters. I don't think there is one that can represent a whole category such as solid state, tube, discrete SS, Opamp, etc.

Multiply that by the application you will use it for: drum OH, vocal, bass, bird calls, piano, synth. You'd have millions of comparisons and you still wouldn't fully cover the field, as so many preamps are unobtanium.

And thru what audio chain are you auditioning the equipment? A board, direct into speakers, electrostatic headphones, ad infinitum . . .

And the bottom line is what you hear anyway and what's the best tool for the job at hand, and that comes with the experience and maturity to be able to make good judgement calls.

From my short career in my NYC studio recording jazz I observed that not many people care about what you have as long as it sounds good enough and they think they are playing well. But that's a small niche of artists. I think pop music is a whole other kettle of horses and fish of different colors.

I don't know what else can be said about this ad infinitum topic. But I'll watch anyway.
 
NFB helps reduce noise, distortion and output impedance under normal circumstances,
NF clearly reduces distortion and output impedance. Noise, not so much. Reduced closed-loop gain will have lower output noise than running WFO, but less signal too. 🤔
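A toy numeric illustration of that point (all values hypothetical): output noise scales with closed-loop gain, but so does the signal, so the input-referred SNR doesn't move.

```python
# Toy numbers: input-referred noise is fixed by the amplifier; closed-loop
# gain scales signal and noise alike, so the output SNR stays the same.
import math

EN_IN = 5e-9 * math.sqrt(20_000)   # 5 nV/rtHz over a 20 kHz bandwidth (hypothetical)
V_SIG = 10e-3                      # 10 mV input signal (hypothetical)

for gain in (10, 100):
    noise = EN_IN * gain           # output noise rises with gain...
    sig = V_SIG * gain             # ...but so does the signal
    print(f"G={gain:3d}: noise={noise*1e6:6.2f} uVrms, SNR={20*math.log10(sig/noise):.1f} dB")
```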
but when an amp is driven into clipping, the ability of the feedback to correct breaks down. This prolongs the recovery time of the amp after a transient event, making any distortion more audible and objectionable.
This is a well-known phenomenon to circuit designers. As I shared before, saturation can be more difficult to manage in higher-current power amplifiers. In a high-power amplifier, multiple different stages can contribute issues. Too much lag from a dominant-pole compensation integrator can continue to drive the following stage up/down after slew-rate limiting subsides or banging into a rail stops. This excess interstage drive takes time to settle out, until the output catches up and/or starts moving in the right direction again. Poorly designed, unstable paths can ring or overshoot during this recovery period. In addition, output power devices allowed to saturate will have a recovery time associated with discharging the built-up base charge in each power device. A successful local device remedy is connecting an anti-sat diode from base to collector. This diode clamps the junction, prevents device saturation, and supports near-instantaneous recovery.
Transient events happen in the real world. The mic amp being first in the program chain, it does matter how it sounds and recovers from short-term overloads.
I advocate use of a passive LPF pole on raw audio inputs. I've shared this before, but Marshall Leach (RIP) published a short AES paper describing a NF amp topology that delivered a rise-time-limited response, effectively just like a one-pole LPF response. If interested, do a search here; I've talked about it before.
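For intuition (this is just a generic one-pole demo, not Leach's actual circuit), here is a minimal numeric sketch: a single pole bounds the 10-90% rise time at roughly 0.35/fc no matter how fast the input edge is. The sample rate and cutoff below are arbitrary choices.

```python
# A single low-pass pole bounds the 10-90% rise time at ~0.35/fc,
# no matter how fast the incoming edge is.
import numpy as np

FS, FC = 1_000_000, 50_000                 # sample rate and cutoff, both arbitrary
alpha = 1 - np.exp(-2 * np.pi * FC / FS)   # one-pole smoothing coefficient

x = np.concatenate([np.zeros(100), np.ones(900)])   # instantaneous step input
y = np.zeros_like(x)
for n in range(1, len(x)):
    y[n] = y[n - 1] + alpha * (x[n] - y[n - 1])

i10 = np.argmax(y >= 0.1)
i90 = np.argmax(y >= 0.9)
print(f"rise time ~ {(i90 - i10) / FS * 1e6:.1f} us (theory: {0.35 / FC * 1e6:.1f} us)")
```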

I make a point of not arguing with people on the WWW about things they say that they hear.

JR
 
There is a LOT to bite off and chew on in this/these question(s). It is almost like asking 'Why does one preamp sound different than another?' Almost everything in a preamp changes the sound in/out. I will get to what I think the basics are. Cheaper preamps have gotten pretty good and transparent. Scarlett/Focusrite originally seemed to focus on just a clean, decent, cheap, transparent unit, hence its good market share. The 312 has a lot more options and circuitry going on. I think the 312a would be a much better comparison (500 series) just because it is more minimal. OPs have said the mic (in how it 'fits' with a preamp) is a major contributor to differences in sound, and I agree. Change one component and it usually changes everything. You could get into impedance and load and heat and ADCs/DACs and even the algos that interpret the analog-to-digital and digital-to-analog translations, and on and on.

I think EVERYONE has wrestled with the tech and science vs. just the sound (especially when one desires to know how a preamp works, etc.). Sound may be the most complicated physics in the universe. No one can know it all ~ we are talking lifetimes. Even if we start talking about the physiology of how a specific person 'hears', how their brain digests the sounds, and the varying natural 'ear compression' of sounds. Then you have left- and right-brain differences, etc.

If I had to give one video that would lay out the basics in an understandable form, it would be this one. It goes briefly over old tech vs. new tech, impedance changes, loads, caps, fast vs. slow, etc. I would watch and listen to this vid a few times and I believe you will get it. Aspen Pittman was great at communicating and explaining in a very laid-back way... Here it is.

In my travels I always ask myself 'Why do I want to know this? For what purpose?' Then I do the work of boiling it down to the clearest, simplest question (Einstein said it well: a carefully constructed question, when correctly analyzed, will reveal the answer... and the work and effort is in asking and constructing the perfect question). I work with a lot of people who use cheap, inexpensive preamps and equipment... and when it is in the right hands it is VERY, VERY hard to actually determine which song production is better: the one done with a few-hundred-dollar setup or the top-notch A-list studio setup.

I am a pragmatic thinker and like to set up the visuals in my mind first (this entails stepping way, way back from one's usual narrowcast perspective). First we have a three-dimensional sound growing and emanating from a sound source (think of it as a soap bubble?). Then the environmental acoustics (the room) affect this sound, changing and reflecting it, absorbing certain sounds while intermixing these sounds in different ways. We capture this sound through a sound-capture device (or microphone). Then the sound is translated to a digital interpretation (ADC). Then we take this interpretation and translate it back to analogue (DAC) through a playback system to hear it. Then we have our personal and much different physiology feeling/hearing it.

It is like a big loop where everything affects everything else. I hope this helps; this is how I got my arms around a highly difficult phenomenon to explain and, in turn, communicate : )
 
That is extremely wise. However, I'm prepared to have a damn good row with people about whether things can be measured 😉
I've never heard something that I couldn't measure (though it sometimes involved rolling my own test setups).

I have measured many things that I couldn't hear.

JR

PS: But maybe I just have sh__ for ears. 🤔
 
There is a lot wrapped up in that single statement you made.
First would be to check whether different people actually describe the sound of the preamp the same way.
In the specific case of API preamps, they are accepted as "punchy" just because many people said so. It takes only one person to start a rumour.
Personally, I have never been particularly impressed by them. Not that they are bad, but they just don't sound significantly better than many others.
In order to have something sounding punchy, you need a punchy source, good frequency response, good EQ, good speakers.
 
Well not that much, it seems. I've done a lot of research online, and apart from "beliefs" and "feelings", I was not able to find science-backed answers.
I would suggest you search some more. There are several learned papers on this topic, one of which involves different transfer functions and double blind testing (as I hinted earlier in this thread). I used to have this on an old disk drive - I will see if I can find it but there are definitely others out there.

Edit: A very quick search revealed these:

https://www.axiomaudio.com/blog/distortion

https://www.hifivision.com/threads/audibility-of-total-harmonic-distortion-a-test.87742/

https://www.kv2audio.com/technical-...s show that odd,distortion is audible form 1%.

Cheers

Ian
 
I would propose a study of artificially generated insertion of specific harmonics, and developing a scale of their effect on perceived sound.
It has been done, in the '80s. An article in the AES journal described a set of experiments where dominant types of distortion were evaluated: crossover, GBW limit, slew-rate limit, predominantly 2nd harmonic, predominantly 3rd harmonic...
The problem was that in order to distinguish the different types, the distortion had to be pretty high, much higher than what is typically encountered in a well designed product.
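For anyone who wants to try this at home, a minimal sketch of the stimulus generation (not the AES paper's actual protocol; levels, frequency and duration are arbitrary): add a phase-locked harmonic at a controlled level and A/B it against the clean tone.

```python
# Generate a sine with a controlled, phase-locked 2nd or 3rd harmonic
# for A/B listening against the clean tone.
import numpy as np

def tone_with_harmonic(phase, order, level_db):
    """Fundamental plus one harmonic at level_db relative to it."""
    return np.sin(phase) + 10 ** (level_db / 20) * np.sin(order * phase)

FS, F0, DUR = 48_000, 440, 2.0
phase = 2 * np.pi * F0 * np.arange(int(FS * DUR)) / FS

clean   = np.sin(phase)
h2_1pct = tone_with_harmonic(phase, 2, -40)   # 1% second harmonic
h3_1pct = tone_with_harmonic(phase, 3, -40)   # 1% third harmonic
```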
 
It doesn't have to be exhaustive, but I wish it could show more than simply "a different frequency response".
It's been said already that frequency response differences that look almost insignificant can result in singularly different perception.
But you can't neglect distortion, with all its variants: not only harmonic, but also intermodulation and transient. Limiting measurements to 20 kHz may also hide some artefacts.
I did a null test, for instance, between the two op-amps/transformers on my Warm Audio TB12. There is a difference in the mids/high-mids in particular, but it doesn't sound particularly different to me.
Whatever difference there is may or may not be significant.
The problem with null tests is where you put the reference. If the reference is in the midrange, you will probably have a difference signal that contains LF and HF. If you put the reference at 60 Hz, the difference will probably be in the mids and highs. Even if you match the levels within 0.05 dB, the residual from the level error alone will only be about 45 dB down, which is quite audible.
Then, are both channels used to create the difference close enough?
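The level-matching requirement can be checked with a one-line formula: a pure gain mismatch of x dB leaves a residual of 20·log10(1 - 10^(-x/20)) relative to the signal. A quick sketch (assuming no phase or frequency-response error, just gain):

```python
# Null depth left by a pure gain mismatch (no phase/response error):
# residual = 20*log10(1 - 10**(-mismatch_dB/20)) relative to the signal.
import math

for mismatch_db in (0.5, 0.1, 0.05, 0.01):
    residual = 1 - 10 ** (-mismatch_db / 20)
    print(f"{mismatch_db:5.2f} dB mismatch -> null {20 * math.log10(residual):6.1f} dB")
# 0.05 dB -> about -45 dB; reaching -60 dB needs ~0.01 dB matching.
```

So reaching a -60 dB null requires matching to roughly 0.01 dB, before any phase error is even considered.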
So I'm not sure what to conclude: maybe phase is affected, but it doesn't change the sound? It's a rabbit hole, lol.
Phase is quite often presented as a suspect, mainly because many people don't really understand what it is; they brandish it as a magic token that "explains" everything, just like magic explains the universe's mysteries.
Until now, I've never seen proof that phase is audible outside of its possible effects on an imperfect medium, typically when a change of peak factor results in overdrive.
 
Psychoacoustics is a minefield. Assuming all ears and acoustic sensitivities are the same would be a mistake.
What is acceptable quality to one target market may be abhorrent to another.
If all the facts were known there would be little to talk about.
AP would have all the answers.
 
Psychoacoustics is a minefield.
Just another tool to use after understanding. I have used some psychoacoustic properties, mostly in the design of dynamic-processing side-chains. These are/were pretty rigorously studied.

Back in the '70s/'80s I dabbled in surround sound and delay-based studio efx.
Assuming all ears and acoustic sensitivities are the same would be a mistake.
In no great surprise, I have opined about this before. Back in the 1980s, or even before, there was a contentious public debate between "Golden Ears" and "Meter Readers". I don't recall my exact quote, but it was something to the effect that we designers should strive to understand perceptual phenomena and use them as tools to improve customers' product satisfaction.

Over the decades I had many deep discussions with a fellow console designer (now RIP) about ergonomic factors and their influence on console quality perception.
What is acceptable quality to one target market may be abhorrent to another.
If all the facts were known there would be little to talk about.
At the risk of repeating myself, many facts are already well known. I am not sure what it is about the high-end audio business that feeds this perception that we are still dealing with undiscovered audio factors. I blame marketers, whose job it is to gin up merchantable selling propositions (differences?) between otherwise identical audio paths. The millions of dollars spent selling ethereal differences feed the belief that there is some actual there there.
AP would have all the answers.
The answers are out there but not easily categorized. I have stopped paying attention, but lots of papers regarding psychoacoustics were published in the AES and other technical journals going back several decades.

JR
 
It might very well be a matter of taste...

In the '60s the BBC did a lot of research on high-fidelity sound. One of the results is the famous line-up of BBC monitors. An unexpected result was that engineers aren't useful as guinea pigs. Nor were musicians. The only test group that was able to produce consistent results were the live-music lovers. That might render most of the other research (that I know of) very unscientific.

The problem with the engineers was that they heard problems all right, but on every repeat of the test they found a different problem. The problem with musicians was that they tended to listen to one instrument (besides having hearing problems).

Just look at audiophiles' setups. You can almost tell what genre of music they prefer just from looking at the speakers. Consider classical music, reggae and metal: not too many fans will have comparable speakers. And some only have those speakers because some influential person also had them.

So there might not be a universal answer.

Also, I'm talking about speakers here. In my mind, microphones have the same problem. That becomes clear as vloggers are changing the market. Big brands used to have the market cornered. Now they are bleeding, because these vloggers don't have the money for a Schoeps or a Neumann.

It seems the Neewer BM800 has resulted in a lively DIY market for mics, something unheard of in the past. Sure, there was some DIY, but these days YouTube is filled with a fast-growing number of mic DIY vids. And the shops with mic parts are mushrooming.

Maybe the mic preamp isn't as important as we think? Could it be we're chasing a ghost of yesteryear?
 
I saw multiple claims that it's "a set of factors" that influence the sound, but do we have actual measurements of that? Especially when it comes to transient response, nothing is clear. Maybe I did not find the right studies, and of course it makes sense that many parameters can influence the result - but I'd like to see some clear comparative study of, say, a tube preamp, a clean solid-state one and a colored solid-state one with transformers, that would highlight what is actually different. It doesn't have to be exhaustive, but I wish it could show more than simply "a different frequency response".
If you want to make sense of the differences between pieces of gear, and how to quantify them, you might want to look at the people who make their living doing so. Take UA, for instance, and their plugins. For a couple of decades now they've been having great success modeling various bits, from preamps to tape machines, and getting surprisingly close to the character of the originals and, hence, the differences. To achieve this, they must be quantifying those differences to be able to apply them in DSP. I gather much of their work is physical modeling, which may or may not answer your questions, but they must at least be able to take those models and discern the changes, and that ought to help. Perhaps taking a dive into that pool would be illuminating.

Things like transfer functions in both the time and frequency domains. Impulse response. Harmonic content, both static and dynamic. Intermodulation characteristics. How any given component acts approaching clipping, and after. All these things play a part. No, I don't have the answers to your questions about what exactly the differences are, beyond what else has been said. But hopefully looking at people who've done it can point you toward a constructive line of questioning.
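Several of those quantities fall out of a single swept-sine measurement. A minimal sketch of the log-sweep deconvolution technique (Farina's method), with a one-pole filter standing in for the device under test; the rates and sweep length are arbitrary choices:

```python
# Log-swept sine with deconvolution (Farina's method): one measurement
# yields the impulse response, hence frequency response and more.
import numpy as np
from scipy.signal import chirp, fftconvolve, lfilter

FS, T = 48_000, 5.0
F1, F2 = 20, 20_000
t = np.arange(int(FS * T)) / FS
sweep = chirp(t, f0=F1, t1=T, f1=F2, method="logarithmic")

# Inverse filter: time-reversed sweep with an envelope that tilts the
# spectrum +6 dB/octave so that sweep * inverse is (nearly) an impulse.
inv = sweep[::-1] / np.exp(t * np.log(F2 / F1) / T)

# Stand-in DUT: a one-pole low-pass; replace with the recorded output
# of the real device driven by the sweep.
captured = lfilter([0.2], [1, -0.8], sweep)

ir = fftconvolve(captured, inv, mode="full")
peak = int(np.argmax(np.abs(ir)))
H = np.fft.rfft(ir[peak - 64 : peak + 4096])   # frequency response around the peak
```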
 
I'm not sure what to think of the recent developments in generative pre-trained transformers, but a local one here suggested transient distortion as a parameter worth pursuing:

"Transient distortion refers to the alteration or degradation of transient signals in audio electronics. Transients are short-lived, dynamic bursts of sound that occur at the onset of a musical note or sound event. Examples include the initial attack of a drum hit, the pluck of a guitar string, or the striking of a piano key.

In audio systems, transient distortion occurs when the system fails to accurately reproduce these transient signals. This distortion can manifest in various ways:

  1. Smearing or Blurring: Transients may lose their sharpness or definition, resulting in a blurred or smeared sound. This can happen due to slow response times in amplifiers or signal processing circuits.
  2. Overemphasis or Accentuation: Transients may be exaggerated or emphasized beyond their original level. This can occur due to non-linearities in amplification stages or excessive signal processing, leading to unnatural dynamics.
  3. Clipping: If the amplitude of a transient signal exceeds the maximum capability of the audio system, clipping occurs. Clipping chops off the peaks of the waveform, introducing distortion characterized by harsh, unpleasant artifacts.
  4. Transient Intermodulation: Transients may interact with other signals in the system, leading to intermodulation distortion specific to transient signals. This can result in complex, non-harmonic distortion components that degrade the fidelity of transient reproduction."
..and I tend to agree with her

/Jakob E.
 
www said:
"Transient distortion refers to the alteration or degradation of transient signals in audio electronics. [...]"
[TMI warning]
This is another area where psychoacoustic research may give us some ideas.

Human hearing tends to group very brief percussive transients together into single events, known as the "precedence" effect.
www said:
The Precedence effect was described and named in 1949 by Hans Wallach and fellow researchers. They showed that when two identical sounds are occurring in close succession they will be heard as a single sound event. In their experiments, this fusion occurred when the delay between the two sounds was between 1 and 5 ms for clicks, and up to 50 ms for more complex sounds. In fact, this range is signal dependent, and for speech the Precedence effect disappears for delays beyond 50 ms, and for music the effect can disappear for delays of even 100 ms. Basically, when the delay is sufficiently long, the second sound is heard as an echo.

They also showed that when successive sounds coming from sources placed at different locations were heard as fused, the apparent location of the perceived sound was dominated by the location of the sound that reached the ears first (therefore the name ‘precedence’). The second and later-arriving sounds had minimal effect on the perceived location of the fused sound. This phenomenon plays a key role in our hearing abilities in enclosed rooms, as it remains possible to determine the direction of a sound source (e.g. the direction of a loudspeaker or instrument) even in the presence of various wall reflections.
https://blogs.qsc.com/live-sound/what-is-haas-effect-and-how-to-take-advantage-of-it/

Then there is the "Haas" effect

www said:
The Haas effect derives from a 1951 paper by Helmut Haas, where he examined how the perception of speech is affected in the presence of a single, coherent sound reflection. He confirmed Wallach’s work showing that we localize sound sources in the direction of the first arriving sound despite the presence of a single reflection coming from a different direction.

He further demonstrated that a reflection arriving later than 1 ms after the direct sound increases the perceived level and width of the sound source. Additionally, a single reflection arriving within 5 to 35 ms can be up to 10 dB louder than the direct sound without being perceived as a secondary auditory event (echo).

To summarize, the Haas effect is a special appearance of the Precedence effect where the range of delays between a direct sound and a reflection leading to a single perceived sound event is reduced below 35 ms.

This is more of a concern for mixing recordings (using delay) and for home theater or surround-sound installations.

Transient responses are a major design consideration in dynamics side-chains. Gain elements have finite attack times, so gain cannot be changed instantaneously, often resulting in momentary overload.

I spent a lot of brain cells on designing companding noise reductions. These involve input compressors typically operating at 2:1 ratios. On the output side, 1:2 expanders restore the original signal dynamics. The transition between quiet input passages and loud momentary transients separated the wheat from the chaff in compandor designs.

To help parse out the sundry artifacts caused by side chains not keeping up with signal changes, I had to make them repeatable. To that end I designed a custom tone/signal burst gate. Instead of just switching between full off and full on, I added in a fraction of the dry un-gated signal. This allowed me to replicate the familiar tone-burst signal as well as a range of signals with gain steps. Tone bursts are good for looking at on an oscilloscope, but making gain bursts with complex music helped me parse out the weak links in sundry dynamics processors; a rough sketch of the idea follows. [/TMI]
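A software equivalent of that gain-step burst (not the original hardware gate; the burst timing and the -20 dB floor are arbitrary choices) might look like:

```python
# Gain-step burst: hold the "off" state at a finite floor (a fraction of
# dry signal) instead of full mute, and apply it to tones or real music.
import numpy as np

def gain_burst(x, fs, burst_ms=50, period_ms=200, floor_db=-20):
    """Repeating gain step between unity and floor_db."""
    floor = 10 ** (floor_db / 20)
    period, burst = int(fs * period_ms / 1000), int(fs * burst_ms / 1000)
    gate = np.full(len(x), floor)
    for start in range(0, len(x), period):
        gate[start:start + burst] = 1.0
    return x * gate

FS = 48_000
tone = np.sin(2 * np.pi * 1000 * np.arange(FS * 2) / FS)
stepped = gain_burst(tone, FS)    # feed to a compressor/expander under test
```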

Sorry for the veer, this is a mature topic.

JR
 
I suspect one potentially instructive test would be a sighted one comparing two brands of preamp (say "Neve" and "API").

But in the second round, switch the front panels…

See if the listeners' perceptions track the circuit or the front panel, and then you'll have some clue as to whether the verbal descriptor (e.g. "punchy") was applied to something real or imagined/projected.
 
