Line Receiver with a Pad Questions

GroupDIY Audio Forum


phishman13

Hello everyone.

I am attempting to do a little R&D for a simple mixer-type thing.  For recording, it would be used as basically a collection of channel strips with direct outs, plus routing for a cue mix.  For mixing, it would be used as a normal mixer.

My question is about the input circuitry for the channel.  

I want a simple line input with a pad, polarity reverse, and a line trim.  I would like to use the THAT 1200 line receiver for this.  Since the input will be fed from either a mic preamp or an audio interface, what are the odds that I will ever need a pad? (Even for deliberately driving a preamp into distortion and then cutting the level so I don't kill my converters; I will probably never need a pad for an output from my interface.)  And if I do want one, will I want it before the THAT1200, or after?

As of now, the order I have come up with is

TRS ---> Pad ---> Polarity Reverse ---> THAT1200 ---> Line Trim of some sort.

Does that make sense?


And the next question.

I don't really mind how much the pad attenuates as long as it is between 12 and 20 dB.  I have tried using the H-pad calculator here:
http://www.mcsquared.com/dbframe.htm

I would probably want an input Z of 10k or higher, correct?  And what about the output Z to feed the 1200 (assuming the pad comes before it)?

The values that I tried didn't really get useful until I had a pad of 20 dB.  The input Z was a minimum of 10k, and that forced the output impedance to end up around 600 ohms.  Is that an acceptable value to feed the 1200 with?

Ideally I want a pad closer to 12 dB, while keeping the impedances happy for a line input into the 1200.  So I guess I would like to know what maximum output impedance of the pad would still be fine feeding the 1200.  And just to make sure, is the 10k input impedance high enough for a wide range of sources?
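For reference, here is a rough sketch of the unequal-impedance pad math behind that sort of calculator.  These are just the standard textbook T/H-pad equations; I'm assuming the mcsquared calculator uses the same ones, so treat this as a sketch rather than a statement about that page:

[code]
import math

def h_pad(z_in, z_out, loss_db):
    """Resistor values for a balanced H pad between unequal impedances.

    z_in, z_out -- image impedances looking into each side (ohms)
    loss_db     -- attenuation in dB
    Returns (series_in_per_leg, series_out_per_leg, shunt) in ohms.
    """
    n = 10 ** (loss_db / 10.0)                       # power ratio
    shunt = 2.0 * math.sqrt(n * z_in * z_out) / (n - 1.0)
    r_in = z_in * (n + 1.0) / (n - 1.0) - shunt      # total series, input side
    r_out = z_out * (n + 1.0) / (n - 1.0) - shunt    # total series, output side
    if r_in < 0 or r_out < 0:
        raise ValueError("loss is below the minimum possible for this impedance ratio")
    # In the balanced (H) form, each series resistance is split between
    # the hot and cold legs.
    return r_in / 2.0, r_out / 2.0, shunt

# 20 dB, 10k in / 600 ohms out -- roughly the case described above:
print(h_pad(10_000, 600, 20))    # ~(4854, 59, 495) ohms
# Asking for 12 dB with the same impedances raises ValueError: a 10k-to-600-ohm
# pad can't be built with much less than ~18 dB of loss, which is probably
# why the calculator only started giving sensible values around 20 dB.
[/code]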

I'd appreciate any guidance in this matter.
 
Great!  Thanks for the insight.  That answers that question.  I guess I'm just thinking along the lines of a traditional mixer, which does have the pad at the mic/line input.

Now what about if I feed the output of the 1200 into a gain stage that acts as a line trim, such as a +/-12 dB gain control?  If a signal near the maximum of +26 dBu is fed into an IC op-amp gain stage, there might need to be a pad there, wouldn't there?  From what I can calculate, +26 dBu is something like 40+ volts p-p.  I was looking at some datasheets (5534, OPA604) just to see what the max input voltage was, and I guess I just don't know what I'm looking for.  The Burr-Brown stuff can run at a higher supply voltage, so does that mean it can handle a greater input voltage?  So if you use that with a supply of +/-24 V, can it then accept an input of almost 48 V p-p?  Maybe these conclusions are just some bad habits I've formed from my minimal formal electronics education (a minor in college, and the last analog class stopped just before op-amps because they took too long on the other stuff :mad:).
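Just to sanity-check that arithmetic, here is a rough sketch.  The "volt or two of headroom lost at each rail" is an assumption about typical op-amp output stages, not a datasheet figure:

[code]
import math

def dbu_to_volts(dbu):
    """0 dBu is 0.7746 V RMS; return (V RMS, V peak, V peak-to-peak)."""
    v_rms = 0.7746 * 10 ** (dbu / 20.0)
    v_peak = v_rms * math.sqrt(2.0)
    return v_rms, v_peak, 2.0 * v_peak

print(dbu_to_volts(26))          # ~15.5 V RMS, ~21.9 V peak, ~43.7 V p-p

# Very rough upper bound on what an op-amp stage can swing: the output
# can't reach the rails, so assume a volt or two is lost at each rail.
def max_swing_pp(rail_volts, headroom=1.5):
    return 2.0 * (rail_volts - headroom)

print(max_swing_pp(15))          # ~27 V p-p on +/-15 V rails
print(max_swing_pp(24))          # ~45 V p-p on +/-24 V rails
[/code]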

So if someone could clear some of these things up for me, I think I will be closer to the right path.
 
The 1206 has 6 dB of built-in attenuation, so a +26 dBu signal at the input will come out at +20.  If you trim it 12 dB down, it will run at +8 dBu, which is perfectly manageable with +/-15 V rails.  It may just be a little inconvenient.  If you think it's a likely, repeatable scenario (IMO, it's not), you may increase the trim range to -20/+12.  You just have to make sure the topology you use actually attenuates the signal PASSIVELY when you trim the input down.  The most common topologies are a basic potentiometer followed by a make-up amplifier, or a swinging-input stage, where the potentiometer is connected between the + and - inputs of an op-amp, effectively attenuating passively and boosting actively.
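A quick numeric restatement of that gain staging, as a sketch (the ~+21 dBu clip point assumed for a +/-15 V op-amp stage is a ballpark figure, not a datasheet number):

[code]
# Follow the level through the chain described above: a hot +26 dBu source,
# the 1206's fixed -6 dB, then the trim pulled all the way down.
source_dbu    = 26.0
receiver_gain = -6.0     # THAT1206 fixed gain
trim_gain     = -12.0    # trim at its minimum

internal_dbu = source_dbu + receiver_gain + trim_gain
v_rms = 0.7746 * 10 ** (internal_dbu / 20.0)
print(internal_dbu, round(v_rms, 2))   # +8 dBu, ~1.95 V RMS (~2.8 V peak)
# A +/-15 V op-amp stage typically clips somewhere around +21 dBu
# (assumption), so +8 dBu leaves a comfortable margin.
[/code]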
 
that all makes good sense to me. 

I guess what was holding me up was my preconception that I wanted to feed the output of the 1200 into a gain stage running at +12 dB, then FOLLOW it with the pot and just mark the panel where 0 dB (really -12 dB) lands.  I think I read somewhere on the forum that it's "better" to follow the gain stage with the attenuation for noise purposes, but that may be situational (or I read too deeply into what was really said).  I just don't know! ha.

I think what also could have been meant by that is that the frequency response of an op-amp changes slightly at different gain settings, so it's more consistent to keep the op-amp at a fixed gain and attenuate passively.  Could I do that but with the pot before the op-amp instead of after?  What would be the difference?

Oh, and by the way, thanks a lot for following along with me so far.  I'M LEARNING! ;D
 
> I think i read somewhere ...that its "better" to follow the gain stage with the attenuation for noise purposes, but that may be situational

Yes. Situational.

You would like to keep signal levels above 0.1V but not much over 1V.

Lower gets you close to noise.

Higher costs more.

The mike-amp must accept signals near 0.001V, so it must amplify first.

Modern rock-studio mike amps must also accept >1V. This conflicts with the design choices made to accept 0.001V signals cleanly. So we keep a 20dB attenuator handy.

The output of your mike amp "should" be ~1V.

If you insist on banging a GE-47 to +37dBm "for flavor", then you will supply a 30dB-40dB pad before you let that too-hot signal near your other gear.

Non-mike inputs normally expect "line level", but perhaps not any exact level. A hi-fi's line inputs might be fed 0.05V from a weak source, or 10V from an overloading AM tuner. The conventional line input was an amp with 0.05V sensitivity, preceded by a potentiometer which could reduce much larger signals to 0.05V. You can get fairly good S/N at a 50mV level, maybe 80dB, maybe a bit better. If you do not have to take "ANY" signal, and can special-case the weak "line" signals, 0.2V sensitivity with a pot in front is a good plan.

1V is the "average" level of a LOUD talker on an analog phone line. With some fudge, this became the +8dBm (1.95V in 600r) limit on line level. In older broadcast work, 10dB headroom was considered ample, +18dBm peaks. In recording, where you may hear a clip dozens of times, 14dB headroom is less annoying. Under the conventional +18dBm limit, this leads to the +4dBm level often mentioned today.

+18dBm remained a common peak level. THD was not well controlled, so you wanted at least 100mW (+20dBm) undistorted to get +18dBm clean. With simple circuits, you need around a Watt of amplifier power to make a good 100mW of audio power. Tubes don't get much cheaper below a Watt, get more expensive above a Watt. Transistors can be more efficient, but rise in price above 0.3W. So ~100mW / +18dBm was and still is a good peak level.
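Those dBm figures are just arithmetic; a small sketch of the standard conversions, in case it helps:

[code]
import math

def dbm_to_mw(dbm):
    """Power in milliwatts, referenced to 1 mW."""
    return 10 ** (dbm / 10.0)

def dbm_to_vrms(dbm, z=600.0):
    """RMS voltage of that power into a given impedance (600 ohms by default)."""
    return math.sqrt(dbm_to_mw(dbm) / 1000.0 * z)

for level in (4, 8, 18, 20):
    print(f"+{level} dBm = {dbm_to_mw(level):6.1f} mW = "
          f"{dbm_to_vrms(level):5.2f} V RMS in 600 ohms")
# +8 dBm  ->   6.3 mW, 1.95 V  (the "loud talker" line-level limit)
# +18 dBm ->  63.1 mW, 6.16 V  (the common peak level)
# +20 dBm -> 100.0 mW          (the "clean 100 mW" figure)
# +4 dBm is simply the +18 dBm ceiling minus 14 dB of recording headroom.
[/code]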
 