Power rails on opamp

I was actually wondering why the power supply to the preamp was lowered; after all, this just sets the minimum/maximum output voltage for the opamps, and we'll never reach 18V (or even 15V) anyway. I suppose there might be some non-linearity as we get closer to the maximum/minimum swing, but on a preamp we're always far from that.

Since most studio equipment runs on +/-18V (or +/-15V) and not +/-3V, I suppose there's a good reason for that...!
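To put rough numbers on the swing question, here's a minimal sketch of how rail voltage maps to maximum output level in dBu; the 2V of dropout per rail is an assumed figure for illustration, real opamps vary:

```python
import math

DBU_REF_VRMS = 0.7746  # 0 dBu = 0.775 Vrms (historically 1 mW into 600 ohms)

def max_output_dbu(rail_v: float, dropout_v: float = 2.0) -> float:
    """Maximum sine output (dBu) on symmetric +/-rail_v supplies,
    assuming the opamp swings to within dropout_v of each rail."""
    v_peak = rail_v - dropout_v
    v_rms = v_peak / math.sqrt(2)
    return 20 * math.log10(v_rms / DBU_REF_VRMS)

for rails in (18, 15, 5):
    print(f"+/-{rails} V rails -> ~{max_output_dbu(rails):+.1f} dBu max")
```

By this arithmetic +/-18V gives about +23dBu and +/-15V about +21.5dBu, so a few extra volts of rail buys under 2dB of swing, which matches the replies below.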
This is a pretty old topic. The overall signal-quality benefit from a few volts of extra power supply rail is minimal, whether you measure it as headroom, S/N, etc.

For perspective, many A/D converter ICs run on 5V PS rails.

JR
 
I don’t think the high rail voltage is for audio quality but to allow swing voltages that accommodate the headroom requirements. The MCI consoles were designed to give +24dBm at any amplified output stage, so they could directly drive high-magnetisation 900nWb/m tape machines (+18dBm, i.e. 14dB over the +4 standard, which I think was used for orchestral recording) to saturation. So possibly the high rail voltages in the initial preamp stage are there simply because they are taken from common supply lines used all over the console. Also to take into account is the internal channel bus headroom, which might also require this.
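As a sanity check on those figures, here is the bare arithmetic for what +24dBm into 600 Ohms demands from the rails (no assumptions beyond the 600 Ohm load):

```python
import math

def dbm_to_vrms(level_dbm: float, load_ohms: float = 600.0) -> float:
    """Rms voltage needed to deliver level_dbm into load_ohms (0 dBm = 1 mW)."""
    power_w = 1e-3 * 10 ** (level_dbm / 10)
    return math.sqrt(power_w * load_ohms)

v_rms = dbm_to_vrms(24)        # ~12.3 Vrms
v_peak = v_rms * math.sqrt(2)  # ~17.4 V peak
print(f"+24 dBm into 600 ohms = {v_rms:.1f} Vrms ({v_peak:.1f} V peak)")
```

That ~17.4V peak is why +/-18V rails are roughly the minimum for a +24dBm output stage.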
 
Not to feed this veer, but back in the 1980s, when I was still writing my "audio mythology" column for Recording Engineer/Producer magazine, I did a comparison between professional mixers with +4dBu nominal 0VU and semi-pro mixers with -10dBV nominal 0VU. Long story short, the -10dBV mixers had more headroom. The advances in low noise electronics since then make the -10dBV gear look even better on paper. 🤔
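For anyone who wants to see how that comparison can come out that way, a small sketch; the +21dBu clip point is a hypothetical figure chosen for illustration (roughly +/-15V rails), not a measurement:

```python
import math

DBU_REF = 0.7746  # 0 dBu = 0.775 Vrms
DBV_REF = 1.0     # 0 dBV = 1 Vrms

def dbv_to_dbu(level_dbv: float) -> float:
    """Convert dBV to dBu (same voltage ratio, different reference)."""
    return level_dbv + 20 * math.log10(DBV_REF / DBU_REF)

clip_dbu = 21.0  # hypothetical clip point for both mixers
print(f"-10 dBV nominal = {dbv_to_dbu(-10):.1f} dBu")                       # ~ -7.8 dBu
print(f"headroom at +4 dBu nominal:  {clip_dbu - 4.0:.1f} dB")              # 17.0 dB
print(f"headroom at -10 dBV nominal: {clip_dbu - dbv_to_dbu(-10):.1f} dB")  # 28.8 dB
```

With the same clip point, the lower nominal level simply leaves more dB between 0VU and clipping.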

Don't overthink this; at the end of the day your recording quality will be dominated by mic/mic-preamp S/N and room S/N. Of course there are always ways to degrade the signal.

If you need to drive +24dBm into 600 ohms, we used to make and sell "bump boxes" to interface as needed. These seem to have gone out of fashion in the decades since, so maybe they are no longer needed. ;)

JR
 
Dead right - those were the days when consoles were governed by tape recorders and tape's S/N; some manufacturers provided line driver modules for extended headroom. We don't need that any more. Even then, the -10dBV S/N in top hi-fi gear was better than a lot of "pro" gear at +4, but then the mixers only needed the summed noise of all driven channels to be better than tape. The advent of digital recording changed everything.
 
There are many other facets to this. For instance, an output stage that is designed to feed +24dBu into 600 Ohms is far more likely to be able to drive long lengths of 'indeterminate' cabling around the studio, which can then lead to better HF frequency response (less roll-off and/or distortion), as well as the confidence that it will actually drive at least a couple of 'bridging' inputs very well.
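A minimal sketch of that cable-driving point, using a simple lumped RC model; the 100 Ohm build-out resistance, the capacitance-per-metre figures, and the 100 m run are all assumed values for illustration:

```python
import math

def cable_pole_hz(source_ohms: float, pf_per_m: float, length_m: float) -> float:
    """-3 dB frequency of the low-pass formed by the output's source
    impedance and the cable's lumped capacitance (simple RC model)."""
    c_farads = pf_per_m * 1e-12 * length_m
    return 1 / (2 * math.pi * source_ohms * c_farads)

# hypothetical 100 ohm build-out resistor driving a 100 m run
for name, pf in (("ordinary pair", 100), ("star quad", 150)):
    print(f"{name}: -3 dB at ~{cable_pole_hz(100, pf, 100) / 1e3:.0f} kHz")
```

In this model the pole stays above the audio band, but the extra capacitance still demands proportionally more HF current from the output stage, which is exactly the stress described below.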
Using 'star quad' cabling, which often has higher capacitance than other cables, can put greater pressure on the HF/high-current capability of an output stage, as a trade-off for 'improved' balance that is only useful if BOTH ends of any link (and the connectors etc. en route) have high CMRR.

There was (and still is) a 'specmanship' game where every manufacturer likes to boast about some aspect that is 'better' than the competitors', whether it is relevant or not. Console mic preamps were expected to have an EIN of -128.5dBu (20Hz-20kHz bandwidth, terminated in 150 Ohms), with specified meter characteristics (RMS, peak, quasi-peak) and CCIR or other weighting, which can be used to emphasise or hide possible 'issues' in some gear when you know what the numbers are actually saying.
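To see what an EIN figure like that means against physics, a quick sketch of the thermal noise of the 150 Ohm termination itself:

```python
import math

K_BOLTZMANN = 1.380649e-23  # J/K

def thermal_noise_dbu(r_ohms: float, bandwidth_hz: float, temp_k: float = 290.0) -> float:
    """Thermal (Johnson) noise voltage of a resistive source, expressed in dBu."""
    v_rms = math.sqrt(4 * K_BOLTZMANN * temp_k * r_ohms * bandwidth_hz)
    return 20 * math.log10(v_rms / 0.7746)

floor = thermal_noise_dbu(150, 20_000)
print(f"150 ohm source, 20 kHz bandwidth: {floor:.1f} dBu")             # ~ -131.0 dBu
print(f"a -128.5 dBu EIN spec is ~{-128.5 - floor:.1f} dB above that")  # ~ 2.5 dB
```

So that spec sits only a couple of dB above the theoretical floor of the source resistance, which is why the measurement conditions (bandwidth, termination, weighting) matter so much when comparing numbers.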
 
