Help needed with OPA134/DRV134 active DI box

GroupDIY Audio Forum


MrShhh

Well-known member
Joined: Oct 26, 2004 · Messages: 98 · Location: Northamptonshire, UK
Hi everyone, I was after a super-clean active DI box with a different sound from transformer- or valve-based DIs, so I built the following using a BB OPA134 buffer stage driving a BB DRV134 line driver:

DI Box Schematic:
http://i985.photobucket.com/albums/ae333/MrShhh_2009/DRV134DIbox.jpg

Photos:
http://i985.photobucket.com/albums/ae333/MrShhh_2009/DSCF0935.jpg
http://i985.photobucket.com/albums/ae333/MrShhh_2009/DSCF0942.jpg

This is the first attempt I've made at designing something myself, so I was hoping people could please have a look and see if there's anything I've done that's not quite right or could be improved.

It sounds pretty good, although there are a couple of things I noticed when "measuring" it:

1. DRV134 Signal balance ratio: SBR = -20log(Vo/Vi) using this test circuit:
http://i985.photobucket.com/albums/ae333/MrShhh_2009/DRV134SBRTestCircuit.jpg

Vi = 0.775Vrms sinewave (z=50ohm)

My measurements:
Hz        Vo (mV rms)    SBR (dB)
250       6.0            42.2
1k        5.7            42.7
5k        5.9            42.4
10k       6.2            41.9
15k       6.7            41.5
17.5k     6.9            41.0

The datasheet SBR for the DRV134 is 54dB best, 35dB worst @ 1kHz, so is 42.7dB ok? Looking at my schematic, could this be improved or am I up against the limitations of the DRV134? The figures are much the same for both channels.
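Just to show the arithmetic, here's a quick Python sketch that reproduces the SBR column from the formula above (assuming the Vo values are the millivolt rms readings in the table):

```python
import math

VI = 0.775  # Vrms drive level into the test circuit

def sbr_db(vo_mv, vi=VI):
    """Signal balance ratio: SBR = -20*log10(Vo/Vi), with Vo given in mV rms."""
    return -20 * math.log10((vo_mv / 1000) / vi)

# Measured error-output voltages from the table above
measurements = {"250": 6.0, "1k": 5.7, "5k": 5.9, "10k": 6.2, "15k": 6.7, "17.5k": 6.9}
for freq, vo in measurements.items():
    print(f"{freq} Hz: SBR = {sbr_db(vo):.1f} dB")
```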

2. Leaving out R3 (so the OPA134 becomes a unity-gain buffer) causes the output to become unstable on the scope. The scope has a hard time locking onto the waveform, it looks "messy" and the sound gets a bit duller. Does anyone know what is happening here?

Thanks for your help  :)


 
A schematic of your build might generate more replies. ;)
Your 1st link is 'this image has been moved or deleted'.
From your Vo readings at different frequencies this box is at least not super-linear in the audio range.
 
The datasheet SBR for the DRV134 is 54 dB best, 35 dB worst @ 1kHz, so is 42.7 dB OK?

Well, 42.7 appears to be between 54 and 35, so the part meets its specs. But why in the world do you care about that? I know of few if any real-world applications where this would matter.

Leaving out R3 (so the OPA134 become a unity gain buffer) causes the output to become unstable on the scope.

Why do you want a fraction of a dB gain anyway? There should be no inherent instability without R3, so it is a layout problem. Try a small cap (~10 pF) across R2 to cancel stray capacitance or short R2.

Samuel
 
The input cap value makes no sense. With 4.7uF you get a -3dB frequency of 0.03Hz. Limiting subsonic noise would be a good idea, and a cap with large dimensions also adds noise pickup.

Same thing with the 1000uF electrolytic between the first stage and the balanced driver. The driver input impedance is more than 10kohm; if you absolutely want to use that cap, something like 100uF is more than enough. I would use a direct connection, since the DC gain of the first stage is close to unity and the input is blocked by C1. The DC error caused by the op-amp itself should be smaller than that caused by the driver.

It would be a good idea to add a resistor between the 1M load and the positive input of the op-amp. It protects the op-amp against excessive input voltages (and doesn't affect noise performance in practice).

I wouldn't use tantalums for PSU filtering (without a fuse).

Samuel already said everything necessary about CMRR and op-amp stability...
 
Why do you have a 4u7 cap at the input, setting this HPF @ 0.03Hz? 100nF seems sufficient, but YMMV.
A 1nF cap across R2 for a 160kHz rolloff might help the oscillation (10pF @ 16MHz probably won't). Otherwise, as Samuel already posted, leave out R3 and maybe short out R2.
Why do you have a 1000uF cap at the DRV134 input, setting this HPF @ 0.016Hz? 22uF seems sufficient, but YMMV.
A 22uF/35V in series with a 10K shunt to each DRV134 output might help the chip survive when accidentally hit by +48V phantom voltage.
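For anyone checking the numbers, the corner frequencies quoted above all come from the first-order formula f = 1/(2*pi*R*C). A quick Python sketch; the 1Mohm input load and the ~10kohm DRV134 input impedance are taken from the thread, and R2 ≈ 1k is only an assumption inferred from the 160kHz figure:

```python
import math

def rc_corner_hz(r_ohm, c_farad):
    """-3 dB corner frequency of a first-order RC network: f = 1/(2*pi*R*C)."""
    return 1 / (2 * math.pi * r_ohm * c_farad)

# High-pass corners:
print(rc_corner_hz(1e6, 4.7e-6))    # 4u7 into the 1M input load: ~0.034 Hz
print(rc_corner_hz(10e3, 1000e-6))  # 1000u into the ~10k DRV134 input: ~0.016 Hz
print(rc_corner_hz(1e6, 100e-9))    # the suggested 100n into 1M: ~1.6 Hz
# Low-pass corner of the suggested compensation cap:
print(rc_corner_hz(1e3, 1e-9))      # 1n across R2 (assumed ~1k): ~159 kHz
```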

edit: audiox got there quicker
 
Thanks everyone for your help  :D Here's my reasoning behind things:

Samuel Groner said:
Well, 42.7 appears to be between 54 and 35, so the part meets its specs. But why in the world do you care about that? I know few if any real world application where this would matter.

I was measuring the DRV134 outputs unloaded and found that a 1kHz -6dB test signal was appearing at the outputs as Vo+ = 0.4Vrms & Vo- = 0.43Vrms.

When I loaded the outputs by connecting them to my active monitors (Zin = 20kohm), the same signal then appeared as Vo+ = 0.386Vrms & Vo- = 0.572Vrms. This seemed a bad thing, as surely the signal balance affects noise rejection? Hence, I investigated it more thoroughly using the test circuit from the datasheet plus the OPA134 stage, giving the table in the first post.

Samuel Groner said:
Why do you want a fraction of a dB gain anyway? There should be no inherent instability without R3, so it is a layout problem. Try a small cap (~10 pF) across R2 to cancel stray capacitance or short R2.

I don't want a fraction of a dB of gain; 20k for R3 was a value I chose to keep the gain close to unity, and it also "cured" the erratic scope reading. I wanted unity gain through the OPA134, but without R3 I encountered problem (2). I have tried a 33pF cap across R2, but it didn't make any difference. I will try shorting R2 and the 1nF cap suggested by Harpo and see how that fares.

audiox said:
The input cap value makes no sense. With 4.7uF you get -3dB frequency of 0.03 Hz. Limiting subsonic noise would be a good idea. And a cap with large dimensions also adds noise pickup.

I had a 220nF input cap to start with and wanted to see if there was any audible benefit to making it larger, as the 4u7 cap has a lower ESR. The DI box sounds punchier with the 4u7 cap, or could I be imagining it? (Bigger = better, right? Only kidding! ;D) Radial's JDI passive uses a 22uF bipolar electrolytic into a 200kohm primary winding.

audiox said:
Same thing with the 1000uF elco between the first stage and balanced driver. The driver input impedance is more than 10 kohm. If you absolutely want to use that cap, something like 100uF is more than enough. I would use direct connection since the DC gain of the first stage is close to unity and input is blocked by C1. The DC error caused by op-amp itself must be smaller than caused by the driver.

I tried direct coupling the OPA134 to the DRV134, as the offset from the OPA134 is tiny and probably, as you say, not worth bothering about, but this does not improve the DRV signal balance in any way.

audiox said:
I wouldn't use tantals for psu filtering (without a fuse).

Both supply rails are fused:
http://i985.photobucket.com/albums/ae333/MrShhh_2009/DSCF0931.jpg


 
Connect the minus pin and the output pin together with a short wire link (voltage follower). Remove all other feedback components. Place a 100n ceramic multilayer capacitor between the op-amp power pins (solder it directly to the IC pins, with as short wires as possible). I would be very surprised if that doesn't cure the stability problem.

(By the way, you have the wrong op-amp pin numbering in your schematic.)

 
Thanks audiox, I've corrected the schematic pin numbering and also shown the OPA134 decoupling network.

Place a 100n ceramic multilayer capacitor between the op-amp power pins (solder it directly to the IC pins, as short wires as possible).

Just making sure, this means 100nF between V+ and V- (pins 4 & 7)? Nothing to do with standard decoupling?






 
When I loaded the outputs by connecting them to my active monitors (Zin = 20kohm), the same signal then appeared as Vo+ = 0.386Vrms & Vo- = 0.572Vrms. This seemed a bad thing, as surely the signal balance affects noise rejection?

No, this does not affect CMRR. It results from the unequal input impedances typical of simple line receivers. If you want to cure it you'll need to mod the monitors, but there is no practical problem with it.

Samuel
 
Thanks Samuel,

You're probably aware of the Jensen mod for improving electronically balanced inputs:

http://www.jensen-transformers.com/as/as069.pdf

This claims to improve CMRR by a whopping 65dB @ 60Hz and by 20dB at 3kHz. This was what set me thinking that my signal imbalance was going to cause poor CMRR, since the poorest figures I measured when driving the monitors were in the bass region (probably caused by a 10ohm or so imbalance).


 
This is some kind of brute force, using a transformer in front of a diff amp! Is it necessary? I don't think so, unless you are facing extremely adverse conditions. In a normal situation, you're good with ca. 40dB CMRR. Anyway, remember that the overall CMRR of the complete connection takes into account both the output stage CMRR and the CMRR of the input stage it is connected to. And don't forget the cable, if you're picky.
One thing that is often ignored: CMRR has a polarity. Although only the magnitude is spec'd, it also has a sign, depending on which signal phase dominates, and it is quite possible that the CMRR of the input partially compensates the output CMRR.
If you want to predict the overall performance, it's like an error calculation: you have to add the linear values (you can't just add logarithms). Let's say the output CMRR is 40dB, that's 0.01, and the input CMRR is 50dB, that's 0.003; the worst-case sum is 0.013 => -37.5dB. But there COULD be some cancellation happening, in which case the actual value would be 0.007 => -43dB.
What I'm aiming at is that there is a possibility to fine-tune a SYSTEM by tweaking either the input stage, the output stage, or the CONNECTION. You can do that by installing a pot across the balanced connection, with the wiper to ground; that allows trimming the mid-band CMRR. For HF CMRR, you need a variable cap from one of the outputs to ground.
IMO, this is only a lab experiment; I've never been forced to use such techniques. Most cases of severe disturbance blamed on CMRR turn out to have their origin in a severe defect (missing ground, defective input or output stage, pinched cable).
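The worst-case/best-case arithmetic above can be sketched in a few lines of Python (the 40dB/50dB figures are just the example values from the post; the dB results are printed as positive magnitudes):

```python
import math

def cmrr_db_to_error(db):
    """Convert a CMRR magnitude in dB to a linear error ratio."""
    return 10 ** (-db / 20)

def error_to_cmrr_db(ratio):
    """Convert a linear error ratio back to a CMRR magnitude in dB."""
    return -20 * math.log10(ratio)

out_err = cmrr_db_to_error(40)  # output stage: 40 dB -> 0.01
in_err = cmrr_db_to_error(50)   # input stage:  50 dB -> ~0.003

# Errors of the same sign add; opposite signs partially cancel.
worst = out_err + in_err
best = abs(out_err - in_err)
print(f"worst case: {error_to_cmrr_db(worst):.1f} dB")  # ~37.6 dB
print(f"best case:  {error_to_cmrr_db(best):.1f} dB")   # ~43.3 dB
```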
 
MrShhh said:
Radial's JDI passive uses a 22uF bipolar electrolytic into a 200kohm primary winding.
There is a very different reason for that. The transformer primary is an inductance that resonates with the input cap; the combination is a resonant high-pass filter, and the amplitude of this resonance is quite high, typically 10-20dB. The cut-off frequency needs to be located way below audible frequencies, in order to make sure that the hump does not overlap the audible range.
The Radial xfmr may have an inductance of about 100H; with a 22uF cap, the cut-off is at about 3.4Hz and the amplitude may be ca. 10-15dB (depends on the actual load on the low-Z side).
If the cut-off frequency had been higher, let's say 10Hz (which sounds reasonable, since the lowest frequency of a 5-string bass is 30Hz), this would create a 20dB hump at 11Hz extending up to 30Hz.
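A quick Python sketch of the resonance arithmetic (note the 100H primary inductance is only the ballpark guess from the post):

```python
import math

def resonant_hz(l_henry, c_farad):
    """Corner of the series-cap/transformer-primary resonant high-pass:
    f0 = 1/(2*pi*sqrt(L*C))."""
    return 1 / (2 * math.pi * math.sqrt(l_henry * c_farad))

print(f"{resonant_hz(100, 22e-6):.1f} Hz")   # ~3.4 Hz with the stock 22 uF cap
print(f"{resonant_hz(100, 2.2e-6):.1f} Hz")  # ~10.7 Hz if the cap were ten times smaller
```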
 
MrShhh said:
Just for completeness, the Radial JDI trafo is a Jensen JT-DB-EPC:
140k input impedance, not 200k (ailing memory).
Doesn't make much difference, since the actual impedance of the xfmr varies with frequency; the same xfmr may be rated at 100k if a different -3dB LF point is chosen in the specs.
There is also a 100nF poly cap shunted with the 22uF bipolar. I've always wondered what this was for??
This is the subject of an endless debate over the influence of ESR and armature inductance on signal transmission and integrity. According to some, this cap is supposed to compensate for the increase of impedance due to inductive effects; for others, it bypasses the ESR in order to avoid HF attenuation. This debate can be very heated sometimes; I don't want to enter the arena, for fear of being excommunicated ;)
I think it is pretty much irrelevant when driving a transformer that has several kOhms DCR and a couple hundred mH leakage inductance...
 