EQ caps in parallel or series?

GroupDIY Audio Forum

ruffrecords

Well-known member
Joined
Nov 10, 2006
Messages
16,244
Location
Norfolk - UK
For an EQ you often need odd values of capacitance to hit the required resonant frequencies in LC EQ circuits. So it is common to use a pair of caps to get close to the unusual values required. The problem is made worse by having only the E6 series of values to choose from ( and sometimes even less). I have a feeling (but no proof) that you can obtain a greater useful range of values by using two caps in series rather than in parallel. Thoughts?

Cheers

Ian
 
In theory it is probably six of one, half a dozen of the other, but I have always used capacitors in parallel when assembling odd values.

If we get all esoteric, some of the caps' non-ideal characteristics could be improved by paralleling (ESR, ESL, etc.), but that is not likely to be significant with quality capacitors.

Finally, the parallel configuration, all else being equal, will take up less PCB real estate for the component volume, and with sharp enough pencils (paying for every pF), less cost. Arguably the series connection could use lower-voltage caps, but that ASSumes ideal voltage sharing.

So much easier to just throw them in parallel where you can do the math in your head.

Sometimes easy is good... (almost always).

JR
 
Meaning, does using parallel versus series help with finding values that fall into the gaps between the quantized common cap values? So, for example, if you have caps 10n, 22n, 47n, 100n, ... and one cap is 22n, is it easier to get to 33n or 16n using parallel vs series? Well, 22n + 10n = 32n, which is close. And 1/((1/22n) + (1/100n)) = 18n, which is equally close but required a cap ten times the size.

Then again, quantized values are more like 10n, 15n, 22n, 33n, 47n, 68n, 100n ... So with the 22n, the adjacent gap midpoints are 27.5n and 18.5n. So 22n + 6.8n = 28.8n, which is close, and if we use 100n again, 1/((1/22n) + (1/100n)) = 18n, which is also close.

So it doesn't seem like a win.
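The arithmetic above is easy to check with a couple of lines of Python (working in nF; the two helper functions are just mine, for illustration):

```python
def parallel(c1, c2):
    # Capacitors in parallel simply add.
    return c1 + c2

def series(c1, c2):
    # Capacitors in series combine like resistors in parallel.
    return 1 / (1 / c1 + 1 / c2)

# Values in nF, matching the examples above.
print(parallel(22, 6.8))  # ~28.8n, near the 27.5n gap midpoint
print(series(22, 100))    # ~18.0n, near the 18.5n gap midpoint
```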
 
In the UE-100 they paralleled the frequency-determining capacitors. That way they could mix positive- and negative-temperature-coefficient types to get the unit tropicalized in a predictable way :)

Jakob E.
 
I am sure you could tell from my original post that I was not sure whether one way or the other would give a better resolution of results, and it is still not clear to me even now. The obvious advantage of the parallel method is that it is much easier on the brain to work out the values required, so I think I will stick with that method. Thanks for all the input.

Cheers

Ian
 
gyraf said:
In the UE-100 they paralleled the frequency-determining capacitors. That way they could mix positive- and negative-temperature-coefficient types to get the unit tropicalized in a predictable way :)

Jakob E.
That would also work with the caps in series. In parallel, for the compensation to work, the cap values must be inversely proportional to the tempcos, which puts an additional constraint. In series, it's the contrary: the values must be directly proportional to the tempcos.
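A quick numerical sanity check of that claim (just a sketch; the tempco figures and cap values below are invented for illustration):

```python
def parallel_tempco(c1, t1, c2, t2):
    # Parallel: the combined tempco is the capacitance-weighted
    # average of the individual tempcos.
    return (c1 * t1 + c2 * t2) / (c1 + c2)

def series_tempco(c1, t1, c2, t2):
    # Series: differentiating C1*C2/(C1+C2) swaps the weights, so
    # each cap's tempco is weighted by the OTHER cap's value.
    return (c2 * t1 + c1 * t2) / (c1 + c2)

t1, t2 = 150.0, -300.0  # invented tempcos (ppm/degC), opposite signs

# Parallel cancellation needs values inversely proportional
# to the tempcos: C1/C2 = -t2/t1 = 2
print(parallel_tempco(300.0, t1, 150.0, t2))  # -> 0.0

# Series cancellation needs values directly proportional
# to the tempcos: C1/C2 = -t1/t2 = 0.5
print(series_tempco(150.0, t1, 300.0, t2))    # -> 0.0
```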
 
Sounded like a perfect job for iterating through lists in Python (with which I've been fiddling), so I slapped something together:

From a list of 25 values (.01 through 100uF range):
    Series combinations: 325
    Parallel combinations: 314


Python source:

E6 = [100,150,220,330,470,680]
E12 = [100,120,150,180,220,270,330,390,470,560,680,820]

capuf = [.01,.015,.022,.033,.047,.068,.1,.15,.22,.33,.47,.68,1.0,1.5,2.2,3.3,4.7,6.8,10.0,15.0,22.0,33.0,47.0,68.0,100.0]


#rc = list()
#fc = list()
snewcap = list()
pnewcap = list()

for cap1 in capuf:
    for cap2 in capuf:
        scap = 1/((1/cap1)+(1/cap2))
        pcap = cap1+cap2

        snewcap.append(scap)
        pnewcap.append(pcap)

#ensure no duplicates via set()
snewcap = set(snewcap)
pnewcap = set(pnewcap)

#for cap in pnewcap:
#    for res in E6:
#        rcv = res * cap
#        rc.append(rcv)
#rc = set(rc)

#for rcv in rc:
#    fc.append(1/rcv)
#fc = set(fc)

print('Capacitor Values used (uF): ')
print(capuf)
# print('E6 Values used (ohms): ')
# print E6
# print('\n')

print('Series combinations: ' + str(len(snewcap)))
print(snewcap)
print('\n')
print('Parallel combinations: ' + str(len(pnewcap)))
print(pnewcap)
# print('\n')
# print 'Fc Values: '+ str(len(fc))
# print(fc)



 
@mattmatta

Many thanks. Python is my favourite language; it is so readable. set() is a built-in I have not come across before. Very useful.
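A one-line illustration of what set() is doing in your script:

```python
caps = [0.047, 0.1, 0.047, 0.22, 0.1]
unique = sorted(set(caps))  # set() keeps one copy of each value
print(unique)               # -> [0.047, 0.1, 0.22]
```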

So it seems the series combinations have it by a whisker, probably because of fewer duplicates, but I think I will keep to the parallel form simply because it is easier to work out in your head.

Cheers

Ian
 
ruffrecords said:
Many thanks. Python is my favourite language, it is so readable. set() is a function of a list I have not come across before. Very useful.

I really don't know much about the set method - just did a quick google search for "python list remove duplicates" and some people were using that and saying it guaranteed no duplicates.  Seemed to work for me, though I didn't really look at the results in much detail.  Not sure what duplicates, if any, come from my  chosen input dataset to begin with.

I've really been enjoying Python so far.  Seems to be very powerful and simple to do a lot of complicated things.  It's worked very well and turns out to be much simpler than I would have feared to do  the kinds of things I intend to do with it, which will be mostly ways to bring up boards or units under test, automate tests, log data, etc.  All the kinds of tools and things a hardware guy wishes he had but nobody else has planned/budgeted to build!

 
It might be more useful to write a prog that prints the top 5 combinations (series or para) that yield the closest result given a target value. Then you use series caps selectively.

My guess is that series might actually get closer to an arbitrary value because, if you look at the distribution of values (and not just the number of values), series is making smaller values so the results are going to be grouped closer whereas parallel results will tend to be spread out. Meaning series may be more likely to fill in the gaps.

UPDATE:

As a developer I feel obligated to write some code. Here it is:

Code:
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<body>
<script>

var target = 0.49;

var cvals = [0, 0.1, 0.22, 0.33, 0.47, 0.68, 1, 2.2, 3.3, 4.7, 6.8]; // 0 means "no second cap"

var results = [];

var c1i, c2i;
for (c1i = 0; c1i < cvals.length; c1i++) {
    for (c2i = 0; c2i < cvals.length; c2i++) {
        var c1 = cvals[c1i];
        var c2 = cvals[c2i];
        results.push({t:'para', c1:c1, c2:c2, val:c1 + c2}); 
        results.push({t:'seri', c1:c1, c2:c2, val:1/(1/c1 + 1/c2)});
    }
}

function sortbytarget(a, b) {
    return Math.abs(a.val - target) - Math.abs(b.val - target);
}

results.sort(sortbytarget);

document.writeln('target: ' + target + '<p>');

var ri;
for (ri = 0; ri < 5; ri++) { 
    var r = results[ri];
    document.writeln(r.t + ': ' + r.c1 + ' ' + r.c2 + ' = ' + r.val + '<br>');
}

</script>
</body>
</html>

Output for target = 0.49:

Code:
target: 0.49

seri: 1 1 = 0.5
para: 0 0.47 = 0.47
para: 0.47 0 = 0.47
seri: 0.68 2.2 = 0.5194444444444445
seri: 2.2 0.68 = 0.5194444444444445

Just put the above in a file called whatever.html and then open it in a browser.

Stepping through target values like 0.34, 0.35, 0.36, ... 0.49 I found that series would win for a while, then it would flip to parallel. So it seems it depends on the target value.
 
Seems to me the advantage of parallel is you are not buying 4X the uFd you need.

With 2% caps now affordable, and pFd trim caps quite cheap, only some very odd (or very small) value would even suggest the series connection.

And if you SPICE, many engines barf on series caps (no DC path to the mid-point).
 
PRR said:
With 2% caps now affordable, and pFd trim caps quite cheap, only some very odd (or very small) value would even suggest the series connection.
I've been wondering all through this thread what accuracy the OP wants. It's good to know "precision" caps are affordable, but I'd still want to MEASURE cap values (as well as any inductors to be used in tuned LC circuits) and use the measured values in a Python program or (what I might use) a spreadsheet to show combinations. Might as well calculate and show the frequencies of the LC combinations as well, as this is what the OP ultimately wants.

And if you SPICE, many engines barf on series caps (no DC path to the mid-point).
A hundred megohm resistor from the midpoint to ground should fix that. Or (for spice power users) make a new cap component with 100meg in parallel, and not need to use an explicit resistor.
 
benb said:
I've been wondering all through this thread what accuracy the OP wants. It's good to know "precision" caps are affordable, but I'd still want to MEASURE cap values (as well as any inductors to be used in tuned LC circuits) and use the measured values in a Python program or (what I might use) a spreadsheet to show combinations ...

Good question - wish I knew the answer. This is for the mastering EQ I am working on. The idea of pairs of caps is to allow the required resonant frequencies to be obtained with a fixed set of inductors.

Accuracy per se is not as important as having the same values in both left and right channels. This means inductors and capacitors will most likely be hand selected so the LC values in each channel are the same. The actual values of the resonances are, I think, less important than their being the same in both channels. The Q is only 0.6, so the curves are relatively broad.
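As a sketch of that matching step (the measured values here are invented; only the standard f = 1/(2π√(LC)) relation is a given):

```python
import math

def resonant_freq_hz(l_henries, c_farads):
    # f = 1 / (2 * pi * sqrt(L * C))
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henries * c_farads))

# Invented measured values: (inductance in H, capacitance in F)
left  = (1.02, 33.4e-9)
right = (0.99, 34.1e-9)

f_left = resonant_freq_hz(*left)
f_right = resonant_freq_hz(*right)

# What matters for channel matching is how far apart the two
# resonances are, not their absolute accuracy.
mismatch_pct = abs(f_left - f_right) / f_left * 100.0
print(f'{f_left:.1f} Hz vs {f_right:.1f} Hz ({mismatch_pct:.2f}% apart)')
```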

Cheers

Ian
 
