When to go discrete and when not to go discrete

GroupDIY Audio Forum


pucho812

This has me puzzled. I'm looking at a console schematic of a custom board that is full of discrete opamps. The board itself runs bipolar 24V and is basically split into two sections: the channel with pre and EQ, then the lower half, which is busses, auxes, stereo bus, pan, solo and mute, with a direct out aside from assignments. The designer is using two DOAs per lower half: differential input into one, fader in between the two DOAs. The direct out is taken after the fader through a 1:1 transformer. The auxes are pre/post fader.
Sonics aside, I'm wondering why the choice to use discrete opamps. It's a very API-ish design.
Any reason to do so in such a capacity? Unfortunately I can't share the schematic, but I will draw out a simplified version should people be curious....
 
One reason to use a discrete op amp is where you need some serious drive capability. Many API desks had serious numbers of aux sends and buses fed via pan pots. The total load of all these in parallel has to be driven with minimal distortion up to near the rails if headroom is to be maintained. Often the total load is well below 600 ohms.

Anywhere else that does not have this kind of heavy load expectation can sensibly use an IC.

Cheers

Ian
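Ian's parallel-load point is easy to check with a quick back-of-envelope calculation; the send counts and impedances below are hypothetical, just to show how fast the combined load drops:

```python
# Back-of-envelope: combined load seen by one console output stage
# driving many destinations at once (all figures hypothetical).
def parallel_load(impedances_ohms):
    """Combined impedance of loads wired in parallel."""
    return 1.0 / sum(1.0 / z for z in impedances_ohms)

# Say 16 aux/bus sends, each a 10 kohm bridging input...
sends = [10_000] * 16
# ...plus one 600 ohm terminated line out.
total = parallel_load(sends + [600])
print(round(total))  # ~306 ohms, already well below 600
```

Even with nominally "bridging" 10k inputs, a handful of sends plus one terminated line drags the load an IC output has to drive down into DOA territory.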
 
Marketing?  Modern op amps will generally spank DOAs in every metric, but guess what? Customers do not care...

Tell them you can swing 48Vp-p... they will give you their PIN number.  ::)

You mentioned API, that was their marketing hook (high rail voltage/headroom?) for decades.

JR
 
pucho812 said:
This has me puzzled. Looking at a console schematic of a custom board that is full of discrete opamps. The board it self runs bipolar 24v and is basically split in 2  Sections. 
Pacific Recorders ?  918's ?  Deane liked 1:1 and bipolar 24.
pucho812 said:
It’s a very API-ish design.
API never employed high rail voltages like Quad/Eight: bipolar 15 or 16, like a humble Soundcraft.
What sets vintage API apart from the rest is their use of step-up output transformers.  Bus outputs were typically 1:2, though I've seen 1:3 in at least one vintage desk.  With the exception of the Neve 10xx modules, the most successful EQ design of the past 50 years is the API 550 family: they all use a 1:3 output transformer.  Which is one reason why discrete op-amps with powerful output stages exist.
Let the arguments begin, but just maybe the headroom achieved by the step-up transformer is why the API "sound" is so desirable.
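For reference, the voltage gain of an ideal step-up transformer is 20·log10 of the turns ratio, so the ratios mentioned above buy roughly 6 and 9.5 dB of extra swing (ideal-transformer sketch, winding losses ignored):

```python
import math

def ratio_to_db(turns_ratio):
    """Voltage gain of an ideal step-up transformer, in dB."""
    return 20 * math.log10(turns_ratio)

print(round(ratio_to_db(2), 1))  # 1:2 -> 6.0 dB
print(round(ratio_to_db(3), 1))  # 1:3 -> 9.5 dB
```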
 
gridcurrent said:
Pacific Recorders?  918's?  Deane liked 1:1 and bipolar 24. API never employed high rail voltages like Quad/Eight: bipolar 15 or 16, like a humble Soundcraft. ...

No, not a Pacific Recorders. This is a custom jobbie I was lucky to work on.  If I do the math, there are some 80 discrete opamps in just a 32-channel design.  Pricey...
 
pucho812 said:
If I do the math, there are some 80 discrete opamps in just a 32-channel design.  Pricey...
That's the big issue; because of cost, designers had to reduce the number of active stages, which IMO leaves something to be desired. At the time monolithic ICs were not so good, so DOAs made sense, but today, apart from the 3dB additional headroom due to higher rails, it's just nostalgia and marketing.
BTW, transformers do not really provide significantly higher headroom, because noise is increased equally.
 
abbey road d enfer said:
...BTW, transformers do not really provide significantly higher headroom, because noise is increased equally...
Could you please explain this in a little more detail? What's the relationship between headroom and noise here?
 
moamps said:
Could you please explain this in a little more detail? What's the relationship between headroom and noise here?

Headroom means different things to different people, but generally describes the margin between peak level and nominal level; it is sometimes confused with dynamic range (peak to noise floor).

A transformer wired to boost the voltage swing will boost the nominal level and noise floor at the same time, for no net increase in headroom or S/N. If you redefine the nominal level relative to this boosted output, the noise floor, boosted by the same amount, gives a 1:1 decrease in effective S/N.

Headroom is one of those fuzzy marketing terms thrown around without much explanation. All else being equal, higher rails that deliver more output swing without increasing the noise floor can be a net gain, but the S/N performance of modern low-noise ICs is quite good.

JR
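JR's point can be put in numbers: an ideal step-up transformer shifts nominal level, clip point and noise floor by the same number of dB, so neither headroom nor S/N changes. A minimal sketch, with hypothetical level figures:

```python
import math

def db_gain(turns_ratio):
    """Voltage gain of an ideal transformer, in dB."""
    return 20 * math.log10(turns_ratio)

# Hypothetical stage: +4 dBu nominal, +22 dBu clip, -90 dBu noise floor.
nominal, clip, noise = 4.0, 22.0, -90.0
boost = db_gain(3)  # 1:3 step-up, ~9.5 dB

# The transformer moves all three levels together:
headroom_before = clip - nominal
headroom_after = (clip + boost) - (nominal + boost)
sn_before = nominal - noise
sn_after = (nominal + boost) - (noise + boost)
print(round(headroom_before, 6), round(headroom_after, 6))
print(round(sn_before, 6), round(sn_after, 6))
```

In practice a real transformer's core saturation eventually caps the swing as well, so the ideal picture above is the best case.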
 
moamps said:
Could you please explain this in a little more detail? What's the relationship between headroom and noise here?
I believe you know what I mean. :)  Headroom as a single figure, like 30Vpp or +26dBu, doesn't mean much if the complete picture is not drawn. Noise and THD have to be put in the same picture; that's why I wrote "significantly higher headroom", meaning headroom that provides actually better overall performance. In today's world of 5V converters, what's the point of that kind of max level? In particular, a higher level obtained hand in hand with higher noise is questionable.
 
I do not think there is anything confusing about headroom. It is simply the difference between the maximum possible output level and the nominal operating level. Back in the Neve days, the maximum output from a console was +26dBu. Most were fitted with VU meters on the output with a nominal operating level of +4dBu. This gives a headroom of 26 - 4 = 22dB.

Most op amps these days are lucky to be able to output a maximum +22dBu before clipping so if you retain a nominal operating point of +4dBu then the headroom drops to 22 - 4 = 18dB.

VU meters are notorious for under-reading peaks in music, so the headroom tended to be set high enough above the nominal level to ensure that real peaks did not cause any distortion or clipping. Of course opinions varied about how much headroom was required to achieve this, and VUs under-read by different amounts for different instruments. Back in the days of tape, it was generally agreed that 15dB was sufficient, but tape could usually be driven beyond this for special effect, so mixers tended to have another 10dB to spare so that the tape distorted first.

These days, with rock-hard digital ceilings, you really need to know what the peak level is and set the nominal level a fair way down from clipping. There are various standards: -15dBFS is common, but -18dBFS and -22dBFS are also used.

Cheers

Ian
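Ian's arithmetic, plus the voltage swing behind those dBu figures, can be sketched like this (0 dBu = 0.775 V rms; the level figures are the ones quoted above):

```python
import math

DBU_REF_VRMS = 0.775  # 0 dBu reference voltage

def dbu_to_vpp(dbu):
    """Peak-to-peak volts for a sine wave at a given dBu level."""
    vrms = DBU_REF_VRMS * 10 ** (dbu / 20)
    return vrms * 2 * math.sqrt(2)

def headroom_db(max_out_dbu, nominal_dbu):
    """Headroom: maximum output level minus nominal operating level."""
    return max_out_dbu - nominal_dbu

print(headroom_db(26, 4))        # 22 dB (old Neve console, VU at +4)
print(headroom_db(22, 4))        # 18 dB (typical modern op amp)
print(round(dbu_to_vpp(26), 1))  # ~43.7 Vpp needed for +26 dBu
```

The last line shows why +26dBu outputs wanted either high rails or a step-up transformer: a sine at that level swings close to 44 V peak-to-peak.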
 
ruffrecords said:
I do not think there is anything confusing about headroom. it is simply the difference between the maximum possible output level and the nominal operating level.
A true, academic definition. My answer was based on the practical value of the headroom obtained by step-up xfmrs.


Back in the days of tape, it was generally agreed that 15dB was sufficient but tape could usually be driven beyond this for special effect so mixers tended to have another 10dB to spare so the tape distorted first.
Utterly inadequate in the realm of digital recording.

These days with rock hard digital ceilings you really need to know what the peak level is and set the nominal level a fair way down from clipping. There are various standards, -15dBFS being common but also -18dBFS and -22dBFS  are used.
And you need peak meters. VU meters are like odometers used to measure instantaneous speed.
 
Wasn’t the EBU broadcast standard 12dB of headroom above nominal? The older Neumann gear I have that runs on a +24VDC rail meets that spec but doesn’t have the step-up output of a Neve 80 series.
 
Gold said:
Wasn’t the EBU broadcast standard 12dB of headroom above nominal? The older Neumann gear I have that runs on a +24VDC rail meets that spec but doesn’t have the step-up output of a Neve 80 series.

Broadcast specs were different. They had a very hard clipping point that they were not allowed to exceed, because doing so would result in splatter over adjacent stations. So they all included a stiff limiter in the signal chain, and many (certainly all those used in the UK) were fitted with PPM meters rather than VUs. I seem to remember a lot of the early RCA broadcast mixers operated at +8dBm with a maximum output of +18dBm.

Cheers

Ian
 
Discrete still has the edge for mic pres, IMO. And of course high-current drivers. I'm generally satisfied with ICs for most line stages.

I've found higher rails can sometimes make a difference in performance, even if you only need something like +18dBm.
 
> early RCA broadcast mixers operated at +8dBm with a maximum output of +18dBm.

I suspect the stages which worked at nominal +8 were *rated* +18 @ 0.5% THD but would do considerably more before clipping.

And as you-all say, in radio, when you pass 9,999 watts on a 10,000 watt transmitter, audio-chain distortion no longer matters. (You can't hear the distortion because the adjacent-channel stations are complaining too loudly.)
 
abbey road d enfer said:
That's the big issue; because of cost designers had to reduce the number of active stages, which IMO leaves to be desired. At a time monolithic IC's were not so good, so DOA's made sense , but today, apart from the 3dB additional headroom due to higher rails, it's just nostalgia and marketing.
BTW, transformers do not really provide significantly higher headroom, because noise is increased eaqually.

So a less expensive alternative is an IC opamp with a discrete follower.
 
pucho812 said:
So a less expensive alternative is an IC opamp with a discrete follower.
It's one possibility, but one has to make sure the discrete follower does not impair performance. There are actually a number of monolithics that can drive 600 ohms with ease. I believe most applications can conform to that constraint.
 
PRR said:
> early RCA broadcast mixers operated at +8dBm with a maximum output of +18dBm.

I suspect the stages which worked at nominal +8 were *rated* +18 @ 0.5% THD but would do considerably more before clipping.

And as you-all say, in Radio when you pass 9,999 Watts on a 10,000 watt transmitter, audio-chain distortion no longer matters. (You can't hear the distortion because adjacent channel stations are complaining too loud.)

I have often wondered just how much distortion is introduced by the modulator itself in a 10,000W AM transmitter. FM is much less of a problem because modulation can be done at low level, but a 10kW AM modulator is a whole different ball game.

Cheers

Ian
 
ruffrecords said:
Broadcast specs were different. They had a very hard clipping point that they were not allowed to exceed because doing so would result in splatter over adjacent stations. So they all included a stiff limiter in the signal chain and many (certainly all those used in the UK) were fitted with PPM meters rather than VUs. I seem to remember a lot of the early RCA broadcast mixers operated at +8dBm with a maximum output of +18dBm.
AFAIK, the German broadcast reference was +6dBu for a long time. Nowadays almost all FM-oriented broadcast consoles are initially set to a +4dBu reference. In the early days peak meters were used because the output peak limiters were crude and aggressive; now we use complex digital processors such as the OPTIMOD or OMNIA, consisting of, for example, two-band AGCs, a 5-band limiter, 5-band EQ, dynamic pre-emphasis, soft clippers and a hard clipper at the multiplex output.  You can't easily overload it  :).

The biggest challenge with levels in FM is pre-emphasis (50uS in Europe, 75uS in the USA), which raises the HF content considerably, followed by a low-pass filter at 15kHz with a minimum attenuation of 60dB at 19kHz (a very complex brickwall elliptical filter), where the stereo pilot sits.  In the early days, if the audio signal was too high and had a lot of HF content, the pilot would get modulated by the audio signal and stereo receivers would go out of sync (in the rhythm of the cymbals :) ).

Right now it is better to have VU meters so talents or DJs (there are usually no audio engineers any more to control and mix the program) can keep the volume level constant; the processor will take care of the rest.

Sorry for off topic.
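Those pre-emphasis numbers can be sanity-checked: a first-order network with time constant τ has its corner at 1/(2πτ), and the boost grows roughly 6 dB/octave above it (idealized sketch; real broadcast processors are far more involved):

```python
import math

def preemph_corner_hz(tau_seconds):
    """Corner frequency of a first-order pre-emphasis network."""
    return 1 / (2 * math.pi * tau_seconds)

def preemph_boost_db(freq_hz, tau_seconds):
    """HF boost of ideal first-order pre-emphasis at a given frequency."""
    w_tau = 2 * math.pi * freq_hz * tau_seconds
    return 10 * math.log10(1 + w_tau ** 2)

print(round(preemph_corner_hz(50e-6)))            # ~3183 Hz (Europe, 50 uS)
print(round(preemph_corner_hz(75e-6)))            # ~2122 Hz (USA, 75 uS)
print(round(preemph_boost_db(15_000, 50e-6), 1))  # ~13.7 dB at 15 kHz
```

That double-digit boost at the top of the audio band is exactly why cymbal-heavy material used to push HF energy up against the 19 kHz pilot.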
 
moamps said:
AFAIK, the German broadcast reference was +6dBu for a long time. Nowadays almost all FM oriented broadcast consoles are initially set to +4dBu reference.
Until the 70s, everything was run by the BBC; there were no independent radio stations in the UK. I do not think there were any published specs, but the BBC made a lot of their own gear anyway.  All local radio was BBC only. Then in the 70s independent radio was introduced and the IBA was set up to set and monitor standards. Neve made a lot of consoles for independent radio, and they all had PPMs and operated at +4.

Sorry for off topic.

Cheers

Ian
 