2520 LOSSLESS COMBINING AMPLIFIER

GroupDIY Audio Forum


altitude

Member
Joined
Dec 10, 2009
Messages
15
Hello there  ;D

The plan is to build a passive summing network using an old patchbay with switches for L/R in/out.

Basically a DIY Folcrom.

For makeup gain I have been hoping to use some 2520 opamps that were part of a custom group order some years ago.

Looking at the spec sheets for the 2520

[attached image: api2520_02.jpg]


We have figure 3: a Lossless Combining Amplifier

This seems to fit the bill perfectly.

I have some Cinemag output transformers that were intended for 312 amps

Am I correct to think they should bolt on the end of that diagram to give me balanced out?

Also my inputs will be mainly from balanced sources 1272 amplifiers + the outputs on a layla 24 / B-ADA8000

How would I take my balanced outputs into each single 100k resistor?

Also, the spec sheet states the 2520 will run up to 20 V. Anyone see any problems with the 2520s that I have?

They were part of a large group buy that I think Fabio manufactured a couple of years ago.

I have been out of the loop since then and don't know if there were any issues with these amps...

Any help appreciated

Thanks

Dave


 
That's not a passive summing network.
It's active summing. The output transformer should be fine.

As far as balanced outputs: it all depends on the output circuitry of the actual unit you're connecting.
Transformer-balanced outputs require the -out to be grounded, otherwise you won't hear anything below a few kHz.
Cross-coupled circuits are supposed to have the -out grounded too, in order to work properly and to compensate gain.
Electronically balanced circuits (just one opamp per +/-) generally don't like to have one side grounded. They won't compensate gain either.
 
The patchbay with the switches would be passive summing wouldn't it?

So then I place the array of 100k resistors in my patchbay and send the summed output into the 47 µF capacitor on the negative input of this amplifier circuit.

That will complete it as active summing...?

I guess if I tie the negative outputs from the 1272s to ground, that should work then.

The Layla 24 and B-ADA8000 may be the issue with the electronic balancing?

Will I be okay running these 2520s at 20 V?
 
Had a look at that site, not much more info on the summing side than the datasheets I already have.. unless I'm missing something? Thanks though....

Any thoughts on the actual circuit?

It is quite stripped down compared to the actual 325 circuit. Will it be as stable?
 
I am more than a casual student of summing amp topology.

That "lossless" combining amp is just a bread and butter virtual earth sum amp.

While it may appear lossless compared to a passive combining network, the make up gain is literally being provided by the inverting summing amp.

The noise gain for a virtual-earth topology is N+1, with N being the number of inputs. This is actually an insignificant amount worse (N+1 vs. N) than a passive bus plus a makeup gain stage.
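To put rough numbers on that N+1 vs. N comparison, here is a minimal Python sketch (mine, not from the thread) of the noise gain in dB for each approach, assuming equal input resistors and unity signal gain per input:

```python
import math

def virtual_earth_noise_gain_db(n_inputs):
    """Virtual-earth sum amp with N equal input resistors and a
    feedback resistor of the same value: noise gain = N + 1."""
    return 20 * math.log10(n_inputs + 1)

def passive_plus_makeup_noise_gain_db(n_inputs):
    """Passive resistive bus loses 20*log10(N); a makeup stage
    restoring unity overall gain runs at a noise gain of N."""
    return 20 * math.log10(n_inputs)

for n in (8, 16, 24):
    print(n,
          round(virtual_earth_noise_gain_db(n), 2),
          round(passive_plus_makeup_noise_gain_db(n), 2))
```

For 16 inputs the difference is roughly half a dB (about 24.6 vs. 24.1 dB), which is the "insignificant amount" JR describes.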

I developed a truly lossless sum bus topology by replacing the bus input resistors with current sources. In practice I didn't get the bus amp noise gain down completely to unity, but I did reduce it tens of dB compared to virtual earth or passive plus make up.

I described the general approach in a magazine article (RE/P 1980) and used the technology in consoles from LOFT and AMR. (please don't buy one now and ask me to help fix them.. they're all old and tired now, and I have better things to do).

I don't understand, but wouldn't mind, if there were some real need for OTB mixing, because I have a solution looking for that kind of problem. In theory, digital combining of multiple streams generates more data and more resolution, so any deterioration of sound quality has to be some quirk of current technology not keeping up with that many parallel digital streams simultaneously, or magical thinking associating some other effect with the wrong cause.

JR

 
Thank you for your input here, John.

However I'm now slightly confused.

Are you saying this is a bad idea?

My need for OTB is to make use of some external lineamps, namely 8 1272s, plus some other odds and sods that I might use for effect while mixing.

Would I be better off with a passive network and building a 325 amplifier circuit for makeup gain?

I thought that was what I was 'nearly' doing here anyway?
 
Actually, I'll jump in here on the OTB vs ITB summing point John brought up. I, too, have a vested interest in summing topologies, having built my own product line to do that very thing over the past several years.

So to add to the confusion of why OTB is better than ITB, I'll just say this. Things have gotten much better with digital summing over the past 2-3 years. Why? Because the internal summing mixers for most DAWs have migrated to a 64-bit floating-point summing bus, as opposed to the 32-bit floating-point summing bus of years prior. What this means is that there are 32 more bits of precision for digital summing calculations, so the rounding errors in the mantissa result in far fewer "significant" bits being dropped from the signal as a result of the summing. ProTools, from my understanding, still uses a 48-bit integer summing buss, although it may be up to 56 bits by now. I don't stay current with their technology because I don't use it. Sorry. Never been a big fan. I could elaborate, but that would be a whole other thread of discussion.

So, why do we care about the internal bit resolution of the DAW summing buss anyway? The answer lies in how audio is transmitted from the computer back to your speakers, where it becomes REAL audio once again. Those digital signals have to be decoded back into analog signals, and that conversion is the reason why higher bit depths result in higher resolution: the result more closely approximates the original waveform of the recorded audio, BUT still not 100% exactly the same. Nyquist says that at a 44.1 kHz sample rate the human ear can't tell the difference. Well, that's the theory anyway. However, modern practice in the real world proves otherwise, and is part of the reason for migrating away from the mid-'80s standard of 16 bits for CD-quality audio to the now more ubiquitous 24 bits for high-end digital recording. And I hear 32 bits is just on the horizon.

Now when those digital signals are converted back into analog, there is some loss of the original content, albeit very small, that does not survive the conversion. That's just the original audio signal. Now imagine taking that original signal and mixing it with 16 to 24 tracks of other signals. Mixing these all together requires the computer to calculate the "summed" result of those signals, which means floating-point math on the CPU, an inherently imprecise calculation. A floating-point number is only accurate up to a certain number of bits; any part of the value that exceeds the precision of the mantissa becomes an approximation. An approximation, of course, is not an exact measurement and therefore introduces a quantifiable amount of error into the result. When all of these individual calculations are added together to come up with the final summed result, the errors in lost precision become additive. What you get is an approximation of the summed result with some tangible (albeit small, some would argue) amount of error that does not accurately reflect the true sum of the signals.

Look at it this way: it's like introducing jitter into the summed result. Jitter is, after all, error introduced in the conversion process as a result of mismatched clocks and, in some cases, stray capacitance in cables causing errors in the encoding/decoding process. For CDs and DVDs, jitter is introduced in reading bits off a spinning disc that cannot be read fast enough by either the laser head or the controller streaming bits to the computer. Jitter in and of itself is not bad per se, but it does result in noise, some of which can be heard or at least perceived.

So in the DAW world, increasing the internal summing precision from 32 bits to 64 bits effectively doubles the bandwidth of the summing buss and significantly reduces the amount of error potentially introduced through calculations. Approximations still do occur, but they have been pushed so far out into the mantissa that the theory is they become negligible. Having said that, if those same signals are summed in the analog domain, where everything is just a voltage from each track summed together into an output mix voltage, there will be no approximation errors at all in the summed output. What goes in is what comes out. Some will say that the difference between ITB and OTB is negligible now because of technology advances, but I still say let your ears make the ultimate decision. Even with all the latest advances I still prefer to mix OTB because I truly feel it does sound better. Fortunately for me, everyone who has tried my product and used it for themselves has come to the same conclusion.
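The rounding-error argument above is easy to demonstrate. Here is a small stdlib-Python sketch (an illustration of the general point, not anyone's DAW engine) comparing a naive left-to-right mix-bus sum against `math.fsum`, which tracks every rounding remainder and returns the correctly rounded result:

```python
import math
import random

random.seed(0)
N_TRACKS, N_SAMPLES = 24, 10_000
tracks = [[random.gauss(0.0, 0.25) for _ in range(N_SAMPLES)]
          for _ in range(N_TRACKS)]

# Naive accumulation: rounds after every add, like a simple mix loop.
naive = [sum(t[i] for t in tracks) for i in range(N_SAMPLES)]

# math.fsum keeps all rounding remainders, giving the correctly
# rounded sum. It serves as the error-free reference here.
exact = [math.fsum(t[i] for t in tracks) for i in range(N_SAMPLES)]

err = max(abs(a - b) for a, b in zip(naive, exact))
print(f"worst-case per-sample summing error: {err:.3e}")
```

The error is real, as argued above, but at 64-bit precision it sits hundreds of dB below the signal, which is the counterpoint raised later in the thread.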
 
For Altitude: sorry, I didn't mean to confuse... but I remain confused myself, because I don't see any technical basis for digital combining being inferior, other than perhaps too many parallel operations to manage. But new computer technology has been going more parallel to get around raw speed issues, so maybe the technology is moving in our direction.

If you want to go OTB to use outboard effects and whatever, do what works... There are lots of old consoles out there that need a new friend. Or you can roll your own; there are many DIY designs published.

For druedger: good luck... there seems to be a lot of interest in OTB, so make hay while the sun is shining.

Not sure I follow your digital math, but they should be getting better since they don't have an excuse for not already being near perfect.

JR



 
Hello, I am really no good at this stuff, but lately I've been studying the summing cards of an API Discrete (mid-'90s) console a lot. Balanced summing bus, with a relatively simple universal ACA/booster amplifier with differential inputs, similar to a 325 in parts count.

I like this console a lot and I am putting an 8-bus summing thing together from spare parts.

If you like, I could try to draw a schematic for you? (No docs for the console).

/Johan G
 
druedger said:
So in the DAW world increasing the internal summing precision from 32 bits to 64 bits effectively doubles the bandwidth of the summing buss and significantly reduces the amount of errors in potentially introduced through calculations.  Approximations still do occur, but they have been pushed so far out into the mantissa that the theory is they become negligible.  Having said that, if those same signals are summed in the analog domain where everything is just a voltage from each track summed together into a output mix voltage, there will be no approximation errors at all in the summed output.  What goes in is what comes out.  Some will say that the difference between ITB and OTB is negligable now because of technology advances, but I still say let your ears make the ultimate decision.  Even with all the latest advances I still prefer to mix OTB because I truly feel it does sound better.  Fortunately for me, everyone who has tried my product and used it for themselves have come to the same conclusion.
Interesting material you're writing. Just one question: if 64 bits gives better bandwidth, why not use 256-bit internal processing? Or maybe even higher, to reduce calculation errors even more. I mean, with all of today's technical possibilities, what is the maximum limit for internal processing?
 
I can smell another D vs. A summing debate coming! How about sticking to the topic, since it is more interesting  ;)

JohnRoberts: I'll search out your article on lossless summing bus, sounds very interesting!

Johan G: would like to know more on your summing stuff too!

Please, continue!
 
Thank you, druedger, for your input. It is actually very interesting to see a technical basis for OTB vs ITB.

I'm using Reaper which claims a 64 bit engine, but somehow once you start piling on effects to get that 'hardware' sound things turn to mush and sound pretty one dimensional...

Theory vs reality is always a good one... i.e. Nyquist vs 96 kHz...

All I know is simply sending my buses through some line amps will vastly improve my sound.

JohnRoberts..... no problem.... I've also been down the old console route before... it does sound better, but costs time, money and energy.... all three are in short supply. (not to mention this is a home setup and I don't think the girlfriend would be very happy with a crusty old desk in the front room...)

besides I can fit 16 channels in a 1u space this way and still benefit from external summing....

Johan G FIRE AWAY my man!! Something like this schematic would be most welcome...

I've decided to scrap the 'old patchbay idea' and instead build the summing into the lineamp case.

1u 32 switches on the front....
2x 25 way dsubs on the back for input....
2x xlr output....
possibly vu metering.. or at least an input clip light
maybe a gain trim....
using 2520 DOA circuit
Transformer balanced output

Would love some clarity on.....

running the 2520 at 20 V?
summing balanced devices into a suitable circuit using the 2520?
at what stage to put the metering?

Also, the transformer is quadrifilar wound.

Does this mean I can take 2 outputs from one input?

So perhaps drive a pair of amplifier/speakers with one winding and send the other winding back into DAW for recording the master?

I will draft a schematic....
 
Just read a fantastic document by Fred Forssell about this

http://www.forsselltech.com/downloads/design_discussions/summing_buss.pdf

Looks like we're heading in the right direction by using the 2520 DOA as an Active Combining Network.
 
FWIW active combining, virtual earth, and even the above "lossless" (not) combining amp, are pretty much the same thing.

Variants on those, depending on how complicated you are willing to get, include differential busing, where the remote grounds are bused and brought up to the same sum amp. Another variant, similar to that, is to provide a differential audio feed to the master. A third variant is a balanced sum bus, which involves two sum amps followed by a differential-to-single-ended converter.

Don't get confused by all the variants.. a simple approach will work just fine for a modest number of channels, and modest physical distance to span.

JR
 
detonator said:
Interesting material you're writing. Just one question: if 64 bits gives better bandwidth, why not use 256-bit internal processing? Or maybe even higher, to reduce calculation errors even more. I mean, with all of today's technical possibilities, what is the maximum limit for internal processing?

The problem has to do with the internal bit depth of the CPU and FPU (floating-point) processing core. Until very recently most CPUs, at least in the consumer world of Windows and Macs (UNIX boxes have had 64-bit processors for nearly two decades), were 32-bit processors. Most instructions and calculations are stored in registers that are 32 bits in length, and as a result calculations are constrained by the bit depth of the internal registers. It's actually a physical limitation of the CPU architecture. In the mid-to-late '90s Intel introduced MMX with the Pentium line, which started to address some of the 32-bit limitations, and AMD followed suit with 3DNow and SSE a couple of years later. By adding additional registers that can store up to 64 bits of data, the stage was set to break the 32-bit barrier.

What was interesting in this architecture, though, is that the FPU actually used an 80 bit register for floating point calculation.  The problem, of course, was that the software used to run on those CPUs, including the OS itself, was still based on a 32 bit architecture.  So all of the initial DAW applications were built around a 32 bit digital mixing bus because that's what the system architecture and compiler supported at the hardware level.  If you wanted to support a higher bit depth in your application, you had to custom develop your own kernel level driver to perform the calculations by manipulating the CPU registers directly and in effect circumventing the compiler routines altogether.  This is what Creamware did in 1997 with the release of their TripleDat application which took full advantage of the entire 80 bit register of the internal FPU in their audio engine.  They coded it in assembly and interfaced directly with the CPU. 

So to break the 32 bit barrier required fundamental changes in both the hardware layer and software layer.  That's why many applications did not offer 64 bit internal mixing busses even though the CPUs and hardware could support it.  It really did require a complete rewrite of the audio engine for the DAW because the calculation algorithms all had to be ported from 32 bit calculation routines to 64 bit ones.  As a developer I can tell you this is no trivial task.  Even if the engine were successfully migrated and stabilized, you had the bigger problem of interfacing to external plugins, like DirectX and VST which were still using 32 bits.  So without a strategy and a translation layer to convert to/from 32 and 64 bits, bad things could happen with the audio, including digital artifacts, clicks, pops and even application crashes.  Ah, the good old days.  I don't miss them.

As far as the theoretical maximum bit depth that could be supported, it is really a function of the CPU registers.  I suppose you could custom develop an algorithm to go beyond the physical CPU register storage, but that would certainly increase the calculations required to get the result, and in the end could result in an increase in latency, which would result in lower track counts.  So you see everything becomes a tradeoff at some point.  The increased bit depth would come at a price that may not be worth it when compared to the benefit received.  I suppose at some point the increased resolution becomes so great that even trained ears would have a hard time perceiving a difference.  And then it would become unimportant and not worth investing the R&D dollars to solve a problem that has no practical application.

I hope this hasn't been too overly technical, and I suspect I may have lost many of you in the process.  Hopefully you can see that going to higher bit levels of resolution is not without its issues.  Although it may be technically possible from a hardware layer, and I believe the latest generation of CPUs now have internal registers capable of storing 128 and possibly even 256 bits of data, the software will still have to play catch up to take advantage of it.  If interoperability were not an issue, then it could be done much quicker, but unfortunately no application vendor has the luxury of developing in a vacuum, not even Microsoft.  And especially in the audio world where there are literally thousands of plugins that provide crucial processing for effects and such, it's critical that any updates to the audio engine not break access to third party plugins and applications that have not migrated to a higher bit depth resolution.
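The mantissa-precision limit described above can be seen directly from Python; this is an illustration of IEEE-754 double precision in general, not of any particular DAW engine:

```python
import sys

# IEEE-754 double precision: 53-bit mantissa (~15-16 decimal digits)
print(sys.float_info.mant_dig)

# 2**-52 is the smallest increment the mantissa can hold next to 1.0...
print(1.0 + 2**-52 == 1.0)
# ...while 2**-53 falls below that precision and is rounded away.
print(1.0 + 2**-53 == 1.0)
```

Any contribution smaller than the mantissa can represent simply vanishes from the sum, which is the "approximation" being discussed.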
 
So in the DAW world increasing the internal summing precision from 32 bits to 64 bits effectively doubles the bandwidth of the summing buss.

The number of bits has no influence on the bandwidth (i.e. frequency response), it affects the dynamic range. A properly implemented 32 bit floating-point buss has a dynamic range (probably above 140 dB) in excess of what is reasonably possible with analogue technology. Of course a manufacturer which wants you to sell a hardware box will tell you otherwise. Doesn't change the facts though.
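The distinction (word length buys dynamic range; only sample rate buys bandwidth) can be made concrete with the standard ideal-quantizer formula, sketched here in Python:

```python
def dynamic_range_db(bits):
    """Theoretical SNR of an ideal fixed-point channel: full-scale
    sine vs. quantization noise gives 6.02*bits + 1.76 dB."""
    return 6.02 * bits + 1.76

# More bits widens the dynamic range; the frequency response is
# untouched, since that is set by the sample rate alone.
for bits in (16, 24, 32):
    print(f"{bits:2d} bits -> {dynamic_range_db(bits):6.2f} dB")
```

At 32-bit float the effective range already exceeds what any analogue mix bus can deliver, which is the point being made.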

Samuel
 
Some of what is written here is true. Some of it is not. Yes, digital data is 0s and 1s. But in actual practice, it's not that simple. Digital math on 24-bit data is not cut and dried. And the best PC out there today will only calculate what the software tells it to calculate. There are several different ways to do complex digital computation. Yes, on a chalkboard there is only one way. But with real registers and buffers and CPUs, there is more than one way. When you sum 16 individual, incredibly complex digital sine wave signals at 24-bit word length at 96,000 samples per second, that's some serious arithmetic. You can argue all you want. The differences arise in how the software handles the remainders of all these calculations (and other artifacts too). I can chat with my software engineer in the next cubicle and get you 18 hours of explanation on this if you want. My recording engineer and I have compared several of the popular DAW software programs: Nuendo, Samplitude and a few others. There are audible differences in the end result of summing the same data. We mix OTB only. We don't rely on the DAW software for summing. DW.
 
I am not a digital expert but think I understand the basics... 

What I think I do know:
-Word length beyond 24 bits is mainly useful as a large number for addressing very large memory spaces without the need for paging.

-There is utility in more digital resolution on the small end for very LF, high sample rate, (very very tiny) digital filter coefficients, but this is pretty well managed already by floating point math (using separate mantissa and exponent). 

-For accurate audio reproduction 24 bits is 144 dB resolution/dynamic range, which seems more than adequate (IMO).

-Summing digital streams is a simple MAC (multiply-accumulate) or FMAC (a fused variant that changes how the floating-point result is rounded), but perhaps not trivial when trying to add tens of digital streams all within one sample period.
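The MAC in that last point is tiny in isolation; the workload comes from repeating it for every stream, every sample period. A minimal Python sketch of one bus sample (an illustration, not any console's firmware):

```python
def mix_sample(samples, gains):
    """One bus output sample: one multiply-accumulate (MAC) per input
    stream. The whole loop must finish within a single sample period."""
    acc = 0.0
    for s, g in zip(samples, gains):
        acc += s * g  # one MAC
    return acc

# 24 unity-gain streams at one sample instant
print(mix_sample([0.01] * 24, [1.0] * 24))
```

At 96 kHz with 24 streams that is over two million MACs per second per bus, before any fader scaling or plugin processing, which is where the parallelism question comes in.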

=========

IMO the problem (if real) is not about needing more than 24 bit math, but (perhaps) crunching so many parallel streams at once.

I recently saw an article about high end IBM hardware, and the newest big dog processors are getting more and more parallel, I think one was able to handle 16 or 32 digital streams, for standard computing (like 32 pc's all working at the same time).

But even this does not address the problem, IMO. The issue is combining these numerous streams down to one or a few (virtual buses), which sounds like it may benefit from a specialized piece of silicon if the processor isn't 100x faster than the sample rate (or more).

I suspect the expensive digital consoles deal with this in application-specific chips. The low-end PC-based digital systems are stuck using various shortcuts to manage the computational bottleneck, limited by standard hardware, for better or worse.

I repeat I am not a digital expert, but haven't heard an explanation yet that makes sense to this old analog dog. 

I remain very interested but still in search of the smoking gun.

JR


 
