> has unbalanced inputs and outputs at something called -10db?
> What does the fact it is unbalanced have to do with this -10db?
You can design an interface any way you like.
We normally spec as: impedance (matching, or source/load); level; and unbalanced/balanced/floating.
Telephone convention runs to 100-900Ω, balanced, -10dBm talk, +10dBm shout.
Broadcast convention is typically 150/600Ω, balanced, +8dBm on program VU (implying +18dBm to +24dBm on peaks).
Recording industry convention starts from broadcast custom but typically runs +4dBm program VU for a bit more headroom on the same gear.
With the rise of chips, it was easy to make good audio but hard to reach the old broadcast conventions. +24dBm peaks is a 0.25 Watt power level. All the voltages, currents, powers, and impedances are just past the limits of common chips.
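To make that arithmetic concrete, here is a quick sketch in plain Python; the only assumption beyond the text above is the 600Ω load:

```python
# dBm is power referenced to 1 milliwatt; volts follow from the load impedance.
def dbm_to_watts(dbm):
    return 1e-3 * 10 ** (dbm / 10)

def watts_to_vrms(watts, ohms=600):
    return (watts * ohms) ** 0.5

p = dbm_to_watts(24)    # ~0.251 W: the "0.25 Watt" peak level
v = watts_to_vrms(p)    # ~12.3 V RMS across 600 ohms
print(f"{p:.3f} W, {v:.1f} V RMS, {v * 2 ** 0.5:.1f} V peak")   # ~17.4 V peak
```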
But these conventions derive from long outside lines. Inside a small studio, there is no reason to be gushing 1/4 Watt in the wires, nor any reason to be working into 100-900Ω loads. 100ft/30m of cable is 3,000pF and will drop to about 3,000Ω at the top of the audio band, very different from driving thousands of feet of cable inside a major network broadcast facility or studio-transmitter link.
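If you want to check that cable figure, the capacitive reactance formula 1/(2πfC) gives it directly; a minimal sketch, assuming the 3,000pF total from above:

```python
from math import pi

# Reactance of ~3,000 pF of cable capacitance at the top of the audio band.
def reactance_ohms(freq_hz, cap_farads):
    return 1 / (2 * pi * freq_hz * cap_farads)

print(reactance_ohms(20_000, 3_000e-12))   # ~2,650 ohms, i.e. "about 3,000"
```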
The hi-fi guys faced the same problem: full Pro-Audio interface specs are expensive, add excess stages and distortions, and solve problems you don't have in a 1-room audio system. Hi-fi interfaces are 0.1V to 1.0V, source impedance 1K or less, load 10K or more, and always unbalanced (ignoring recent fads for balanced operation).

Voltage levels were poorly specified until cassette decks all grew Dolby: the standard Dolby chip runs 0.316 Volts at Dolby Zero level (roughly zero VU, or the 3% THD point of early cassette tape), and most deck makers just buffered this to the jacks. You can have peak signals 14dB to 16dB above Dolby reference level.

In CD playback they set levels so digital full scale comes out about 2V RMS, which with typical program material on a VU-speed meter will read about 0.3V average.
-10dB relative to 1 Volt is 0.316 volts. If you measure program with a slow VU meter, the loud parts are about 0.3V. If you use a peak meter, a full power sine is 2V (2.8V peak).
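All these voltage/dB conversions are the same formula; a short sketch of dB referenced to 1 Volt:

```python
from math import log10

# dB referenced to 1 Volt (what later usage calls "dBV").
def dbv_to_volts(dbv):
    return 10 ** (dbv / 20)

def volts_to_dbv(volts):
    return 20 * log10(volts)

print(dbv_to_volts(-10))   # 0.316 V: the "-10" nominal level
print(volts_to_dbv(2.0))   # ~+6 dBV: a full-scale CD sine, 2.8 V peak
```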
TASCAM revolutionized low-price multi-track work with this -10dB standard. All line interfaces are unbalanced, with under-1K output impedance, and should not be loaded with less than 10K. This eliminates a small power amp at every output and a transformer at all inputs and outputs. You can do everything with 741-class chips. (With +/-15V supplies, a 741 can do more than 2.8V peak; but it will be straining to make 10V peaks in the treble. The 2.8V peaks pass easily through modest chips.)
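The "straining in the treble" remark is slew-rate limiting. A sketch of that ceiling, assuming the 741's typical 0.5V/µs slew rate (a datasheet-typical figure, not from the post):

```python
from math import pi

# Largest clean sine peak a slew-limited op-amp can deliver: SR / (2 * pi * f).
def max_peak_volts(slew_v_per_us, freq_hz):
    return (slew_v_per_us * 1e6) / (2 * pi * freq_hz)

print(max_peak_volts(0.5, 20_000))   # ~4.0 V peak at 20 kHz
# 2.8 V peaks fit under that ceiling with room to spare; 10 V peaks do not.
```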
The "-10dB" is referenced to 1 Volt; the "dBm" standard is a power spec and makes no sense in a hi-Z load system.
But this raises a problem: what is the difference between "-10dB" and "+4dBm, 600Ω"? In voltage, it is 12dB (not 14dB) or 1:4, because 0dBm is 0.775V in 600Ω and 0.775V is about -2dB from 1V.
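Worked out, assuming nothing beyond the reference levels already given:

```python
from math import log10

v_0dbm = (1e-3 * 600) ** 0.5        # 0 dBm into 600 ohms: sqrt(P*R) = 0.775 V
v_plus4 = v_0dbm * 10 ** (4 / 20)   # +4 dBm: ~1.23 V
v_minus10 = 10 ** (-10 / 20)        # -10 dB re 1 V: ~0.316 V

print(v_plus4 / v_minus10)               # ~3.9: roughly a 1:4 voltage ratio
print(20 * log10(v_plus4 / v_minus10))   # ~11.8 dB, i.e. about 12 dB, not 14
```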
> I have a multitrack recorder that I want to feed from one of DIY mic pres
Mike preamps come in ALL sizes and flavors. However, nearly any of them can feed a TASCAM-type "-10dB" input. The ones with a single op-amp output are inherently TASCAM-like. "Pro" +4dBm 600Ω outputs will easily drive -10dB re:1V into 10KΩ; just don't use as much gain. If they are balanced, take one wire to the unbalanced hot, the other to ground. If that distorts (many balanced outputs don't like one side grounded), let the other hot leg float.
The converse is not always true. While many "600Ω" inputs are really 10K and have enough reserve gain to take -10dB signals well, some passive EQs and some tube-boxes with high-ratio input transformers really are 600Ω. According to TASCAM spec, this is too low. 741-class chips will strain (though often work OK). 5532-class chips will drive 600Ω without much strain.
> DO any of you take the multritrack outputs into the DIY micpres
WHY, when you already have a line-level signal, would you go back into a mike preamp? Mike preamps solve the specific problem that mikes have weak outputs. When faced with a weak source, the first thing you do is bring it up to some convenient level (line level) where it can't be easily contaminated with noise. Then you never go near mike level again: that would double your noise. As you say: "WHY?"
OK, in the real world we often do go through a mike preamp. Sometimes we have a weak "line level" source, like a faint recording on a wimpy consumer tape deck. And sometimes we avoid the cost of balanced line inputs (which are not essential in small-studio work); then when we do need a balanced line input, we have to pad the line down to mike level and use our excellent mike input.
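A pad like that is just a two-resistor divider. A hypothetical example (the 15K/150Ω values are my illustration, not a recommendation from the post):

```python
from math import log10

# Loss of a simple series/shunt resistive pad.
def divider_loss_db(r_series, r_shunt):
    ratio = r_shunt / (r_series + r_shunt)
    return -20 * log10(ratio)

# 15K series into a 150-ohm shunt: ~40 dB down, and the mike input
# sees roughly the 150-ohm shunt as its source impedance.
print(divider_loss_db(15_000, 150))   # ~40.1 dB
```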
Recently there is a fad for using mike preamps as distortion boxes. Distortion is useful. But it should not be confused with the precision job of amplification. And I don't believe distortion should be applied in tracking, on the one and only recording of that performance. This isn't direct-to-disk! Capture it clean, take a day off to rest your ears, then apply distortion as an editing/mixing effect, same as reverb or EQ. That way if you screw up you can go back to the clean master and try again with more subtlety. It is too easy to use too much reverb, fuzz, or EQ: it sounds good one day on one speaker and wretched when you sober up or try it in the car.