Why does recording gear tend to have such high supply voltages?

GroupDIY Audio Forum

midwayfair

Well-known member
Joined
Apr 7, 2015
Messages
606
Location
Baltimore
I've been wondering why gear designed to deliver a signal of 3.472V peak to peak tends to run on supply voltages many times that high, usually 24-36V. The signal is going into digital converters that often don't want anything even as high as 8V, which is a pretty hard ceiling.

Do the high supply voltages have something to do with the high output current requirements for older equipment with very low impedance inputs? I know that the clean range of a device like a transistor or op amp is smaller than the supply voltages, but it doesn't seem like it would be 1/10 the supply voltage range even in the worst of circumstances.

The only other thing I could think of is that maybe it's to avoid the extra heat the power transformer would make if it had to drop the voltage further, but then again high-current output stages probably create more heat than a bigger supply voltage drop would.

I can't really find any explanations of this so I'd appreciate a bit of insight. Thanks!
 
The answer is headroom. +4dBu is the nominal operating level, but if you want to handle transient signals 15dB above this, then you need to be able to output +19dBu, which is 6.9V rms or 19.53V peak to peak. This was set in stone many years before converters were even dreamed of.
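
To put numbers on that, here is a quick Python sketch (assuming the standard 0dBu reference of 0.775V rms, i.e. 1mW into 600 ohms):

import math

DBU_REF = 0.7746  # 0 dBu reference: 0.775 V rms (1 mW into 600 ohms)

def dbu_to_vrms(dbu):
    """Convert a level in dBu to volts rms."""
    return DBU_REF * 10 ** (dbu / 20)

def vrms_to_vpp(vrms):
    """Peak-to-peak voltage of a sine wave with the given rms value."""
    return vrms * 2 * math.sqrt(2)

for level in (4, 19):
    v = dbu_to_vrms(level)
    print(f"+{level} dBu = {v:.2f} V rms = {vrms_to_vpp(v):.2f} V p-p")
# prints:
# +4 dBu = 1.23 V rms = 3.47 V p-p
# +19 dBu = 6.90 V rms = 19.53 V p-p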

Cheers

Ian
 
We used to go to Tape/Phono, where 15dB-20dB peaks were not always problems.

1.23V up 15dB = 7V rms, 20V p-p. Up 20dB, 35V p-p.

That's all on VU meters, where you don't really know your peaks, just estimate from the quasi-average of a mechanical meter and experience.

It also comes from older amplifiers which had rising distortion well below clipping. I want to drive a transmitter with +10dBm input, over a long line, with 3dB pads both ends of the line to mask impedance variations. I need 40mW. A simple tube power amp rated 50mW below clipping may have 5% THD. I may want under 0.5% THD. (Especially in network operations where many amplifiers may be cascaded.) I will probably need a 5,000mW amp to get THD at 50mW down that low. In fact it was common to use two 6V6 push-pull for heavy long-line drives.
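
That level budget is simple dB arithmetic; a minimal sketch in Python (using the pad and drive figures above):

def dbm_to_mw(dbm):
    """Convert a power level in dBm to milliwatts (0 dBm = 1 mW)."""
    return 10 ** (dbm / 10)

transmitter_drive_dbm = 10  # +10 dBm required at the transmitter input
pad_loss_db = 3 + 3         # 3 dB pads at both ends of the line
amp_out_dbm = transmitter_drive_dbm + pad_loss_db
print(f"amp must deliver +{amp_out_dbm} dBm = {dbm_to_mw(amp_out_dbm):.0f} mW")
# amp must deliver +16 dBm = 40 mW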

Yes, in a day when "tape" is an ADC which clips at 5V p-p, and is 10K not 600r, this is all overkill.

But some intermediate processors (and some mix engineers) like much higher levels to stay above hiss and below overload, especially when the mixer can't watch all intermediate levels with only two eyes.

And because it is what your forefathers all did. Audio guys grabbed op-amp concepts. Tube op-amps swing +/-100V out. Transistor op-amp infrastructure scaled this to +/-10V out. Audio guys stole the modules/chips and the +/-15V supplies right off the B-B notebooks. Levels were similar to what was done before.

 
+1 to headroom (i.e., music has a high crest factor, unlike a simple sine wave), and

+1 to old legacy gear using higher voltage rails to stay well below clipping non-linearity, and

+1 to old legacy gear using higher voltage rails to keep the signal above the not-so-modern noise floor.

Modern IC A/D converters have such low noise floors and such high linearity that some can deliver the full audio dynamic range from a single 5V supply rail.

JR
 
It's pretty standard for conventional analog circuits to run on +-15V (30Vpp). But if the ADC can't handle that range, I would think the signal is probably (or should be) being attenuated just before entering the ADC inputs.

More generally, high voltages provide better dynamic range. If you double the supply voltage, that increases the circuit's dynamic range by 6dB. That's pretty good, especially in an older piece of gear where the components might not yield a really low noise floor. The total dynamic range depends on how quiet the circuit is (the noise floor) and how much signal it can handle (how wide the supply voltage is).
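
A minimal sketch of that arithmetic in Python (assuming the maximum swing scales with the supply while the noise floor stays put):

import math

def dynamic_range_gain_db(old_supply_v, new_supply_v):
    """dB increase in maximum signal (and hence dynamic range) when
    the supply widens and the noise floor is unchanged."""
    return 20 * math.log10(new_supply_v / old_supply_v)

print(f"{dynamic_range_gain_db(15, 30):.1f} dB")  # doubling the rails: 6.0 dB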
 
squarewave said:
It's pretty standard for conventional analog circuits to run on +-15V (30Vpp). But if the ADC can't handle that range, I would think the signal is probably (or should be) being attenuated just before entering the ADC inputs.

More generally, high voltages provide better dynamic range. If you double the supply voltage, that increases the circuit's dynamic range by 6dB. That's pretty good, especially in an older piece of gear where the components might not yield a really low noise floor. The total dynamic range depends on how quiet the circuit is (the noise floor) and how much signal it can handle (how wide the supply voltage is).
If you increase rail voltages you generally need to amplify input signals more to fill up that larger dynamic range.  Since the circuit noise floor gets amplified along with the signal there is no net improvement to S/N.

There is some benefit when interfacing between different chassis to keep signal levels hot, to reduce impact of ground potential differences.

When summing together a large number of stems some extra headroom could be useful, but you still need to scale it back down to the recording medium's dynamic range.

JR
 
JohnRoberts said:
If you increase rail voltages you generally need to amplify input signals more to fill up that larger dynamic range. Since the circuit noise floor gets amplified along with the signal there is no net improvement to S/N.

I'm not talking about changing the gain structure or anything other than the effect of increasing the supply range. Changing the supply range should have no effect on the noise floor but it will allow the circuit to accept a hotter signal. So I think by definition it will have a better S/N because S can be bigger and N stays the same.
 
squarewave said:
I'm not talking about changing the gain structure or anything other than the effect of increasing the supply range. Changing the supply range should have no effect on the noise floor but it will allow the circuit to accept a hotter signal. So I think by definition it will have a better S/N because S can be bigger and N stays the same.
True, but where does the hotter signal come from?

It is safe to ASSume for modern solid state design that peak voltages will be limited to the IC process (roughly 36V p-p). A two-legged active differential output can double that. Tube gear with transformer outputs can be even hotter than that.

I recall when Peavey made their tube preamp (the AMR VMP-2), I had to design a pad for our service guys to give to customers who were creaming the input stage of the then-popular Mackie 8-bus (which didn't have an input pad).

JR
 
Semiconductor circuits can quite easily provide a low impedance output that can drive a 600 ohm load directly without using a transformer. Tubes have great difficulty doing this. However, they can swing a couple of hundred volts peak to peak no problem whilst consuming a modest current. Adding a step-down transformer drops the voltage to the required level and ups the current drive capability by the same factor. The source impedance drops by the same factor squared. This is the most straightforward way of getting tubes to provide pro output levels into 600 ohm loads. The gain of the tube amp needs to be greater to compensate for the transformer step-down, so the overall gain is the same and so is the noise. Many pro tube outputs can produce considerably higher maximum signal levels than semiconductor outputs; +30dBu is not unusual, which means they do have considerable headroom.
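
A rough Python sketch of the transformer arithmetic (the 60V swing, 5k source impedance, and 6:1 ratio below are made-up illustrative figures, not any particular design):

import math

def reflect_stepdown(v_pri_rms, z_pri, turns_ratio):
    """Ideal N:1 step-down: voltage drops by N, source impedance by N^2."""
    return v_pri_rms / turns_ratio, z_pri / turns_ratio ** 2

v_sec, z_sec = reflect_stepdown(60.0, 5000.0, 6.0)
dbu = 20 * math.log10(v_sec / 0.7746)
print(f"{v_sec:.1f} V rms ({dbu:+.1f} dBu) from about {z_sec:.0f} ohms")
# 10.0 V rms (+22.2 dBu) from about 139 ohms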

Cheers

Ian
 
There is even a higher one.

https://spl.info/en/products/headphone-amplifier/phonitor-2/120v-rail-technology.html

For obvious reasons I would like to refrain from giving my own opinion, but at the same time I would like to know what the masters here would think of it.
 
The simplest answer is that components are built the way they are built, and rather than make very special op amps powered by exactly +-7.39V supplies, or whatever crazy-specific level, we simply use what's out there to build audio circuits.

Sure, it seems "silly" to use an amplifier with +-15V supplies to handle a signal that could be handled with a +-7.3V supply. But what if the signal is a little too big? Then the low voltage supply causes the circuit to fail miserably, and we'd want a +-8V amplifier, and so on. Using a +-15V amplifier for a signal that's "too small to justify it" will not fail; doing the opposite will.

When recording, you can't put an absolute ceiling on a raw microphone signal as easily as you can put a ceiling on a finished mix. So, you really have no idea how large of a signal you'll be working with. It's good to have a margin of error, rather than ruin take after take getting the signal to exactly fit into a specific voltage range.

Another good technical reason to use higher supply voltages for linear amplifiers is that they perform better. Generally, these voltage questions are answered by what is needed to make the circuit components work best, and then the whole thing is adapted to fit some sort of audio application.

Audio circuit designers (and electrical engineers in general) rarely get to choose the devices they have available. Either technical limitations, the limitations of physics, or market and sales limitations restrict the range of components that we can use to make circuits. So, some of these factors are not at all controllable - for example, I cannot get a tube that works with a -15V plate supply, so I have to use one with something around a +250V plate supply, or perhaps a nearly unobtainable Nuvistor with a +120V supply. Or, just not build the circuit and wait for plasma beam tubes to be designed so that I can use negative plate voltages LOL! ;-)
 
> a higher one.

Bah. I built a headphone amp with +/-350V rails. (OK, there was a 6:1 transformer after.)

Many years later I used RANE data to figure a Real Frikkin LOUD level in about any available headphone. This is for monitoring in the same room with live band/orchestra. 7V rms out ought to blow your ears.

But headphone issues seem to be outside the scope of "recording gear", being an add-on rather than the direct path to product.
 
Monte McGuire said:
The simplest answer is that components are built the way they are built, and rather than make very special op amps powered by exactly +-7.39V supplies, or whatever crazy-specific level, we simply use what's out there to build audio circuits.

This is probably what the OP really wanted to know. The ADC the OP spoke of probably runs on +-5V (the 8V cited is probably a peak-to-peak figure), so he's wondering if a typical high(er) voltage supply is needed. The answer is that it is not required, because there are op amps that work quite well with +-5V.
 
The question may come from another field: guitar-cord pedals often run on a 9V supply.

Levels are different:

Pro: nominal 1.23V, nominally controlled
Guitar: roughly 0.2V, utterly un-controlled

Taking a 6:1 difference, if Guitar is OK with 9V, then Pro should be run with 54V.

Oh, signal swing is never equal to supply voltage. Many common chips won't swing within 2V of the rail, 4V total loss. So 9V Guitar pedal has potential 5V p-p swing, 30V Pro has potential 26V swing. 26/5 is 5:1 ratio, similar to the 1.23V/0.2V = 6:1 ratio of signals.
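
The same arithmetic as a quick Python sketch (assuming the 2V-per-rail loss above):

def usable_swing_vpp(supply_span_v, loss_per_rail_v=2.0):
    """Usable output swing: supply span minus the voltage the output
    stage can't reach near each rail (2 V per rail assumed here)."""
    return supply_span_v - 2 * loss_per_rail_v

pedal = usable_swing_vpp(9.0)  # single 9 V rail
pro = usable_swing_vpp(30.0)   # +/-15 V rails
print(f"pedal {pedal:.0f} V p-p, pro {pro:.0f} V p-p, "
      f"swing ratio {pro / pedal:.1f}:1 vs level ratio {1.23 / 0.2:.1f}:1")
# pedal 5 V p-p, pro 26 V p-p, swing ratio 5.2:1 vs level ratio 6.1:1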

Note that internal levels may not be external levels. External outputs for nominal +4dBm are often connected "bridge", for effective 60V supply and over 40V p-p signal in low-Z loads.

Of course the real reason guitar-cord devices use 9V is Pocket Radios. Their handy-size, universally available, easy-snap battery defined a lot of 1960s electronics. In recent decades wall-power supplies became common, but 9V is still Standard Voltage.

There is another factor in recording gear: Recording! A transient blip of clipping will go unnoticed in live performance. Put it on tape, some listeners will hear it over-and-over dozens or hundreds of times. Now a minor flaw starts to stand out through repetition and familiarity.

This seems to be the reason for one level shift. In the US the absolute reference was +8dBm (on VU meter), because that was Bell Tel's limit on phone lines such as used in radio networks. I think with tape, recording engineers noted coloration at that level on repeated hearing, and shifted down a few dB. The natural shift was to take the pad off the VU meter, +4dBm. (Naturally if they needed the last few dB to go over a dirty phone line, they could make it up just for that link.)

And recording engineers are supposed to control level more closely than performers. A revved-up guitarist may have notes 20dB higher than "concert average". VU + Peak meter watchers have noted that 16dB headroom over VU is ample 99.44% of the time. (Fussy 99.99%-ers take 18dB or 20dB so they never hear a clip in a lifetime.)
 