Classic Ignorance

GroupDIY Audio Forum

Capturing a performance is key. Even if it distorts, goes into the red, or whatever, if the performance is there, if the emotion is there, then roll with it. I think it's more of an issue nowadays because digital distortion sounds like ass, whereas tube distortion, or even solid-state analog distortion, can sound more pleasing to one's ear.
 
sr1200 said:
OK, over the past few weeks, I've been listening to a lot of recordings from the late 50s and early 60s... This is probably my youth speaking (and I use "youth" relatively, since I don't feel young) but, I don't get it.  You've got some really catchy songs that even by today's standards are GREAT songs... but all my ears focus on is the insane amount of distortion on just about everything I'm listening to.  The vocals on just about every "aggressive" (I use that term loosely) Beatles song I've heard sound like they either hit the tape with the ferocity of a charging bull elephant, or the pre was so far in the red, the clip indicator is still cooling off some 50 years later... and they could never quite straighten the needle on the meter again after it hit the stop...  And it's not only the Beatles.  (Which brings me to another discovery... that everyone from that era was just trying to clone that sound without even trying for something different.)

So is this the "sound" that everyone is looking for? Distorted, loose performance, noisy?  I understand that the style of music was new and engineers of the day were still experimenting with techniques and pushing new technology to its limits and in that, I can hear it as a great history lesson.  But I just hear it differently I guess, since to me, it wasn't something so radically new.



The Beatles represent a tiny fraction of all the recordings from the 50s-60s, and the sonic difference between early Beatles and late Beatles is quite noticeable as well.  I imagine the attitude toward them early on was similar to what other rock n roll acts got - that it was a fad, barely tolerable, or just "total rubbish".  Early Beatles was '63.  Here's a good track from the same year, and to my ears it's hard to do anything less than enjoy the big lush vocal sound.  If there are serious critiques of this sound regarding excessive distortion, then I'm at a loss.  My ears are arguably corrupted from ignoring a lot of modern music and studio gear though.  ;D  Something like Earth, Wind & Fire's 'Serpentine Fire' (late 70s, George Massenburg) is clean for me.

http://www.youtube.com/watch?v=uZ6zVW3V1hc

I do know what you're saying about distorted vocals, but I don't think it was purely limited by the technology, as there were cleaner recordings being made.  What came out of Nashville from 1960-1970 tends to be a good example.



To my ears some aspects of recordings got worse from the late 60s to the early-to-mid 70s due to changes in recording techniques - namely close miking and multitracking many parts.  Something got smaller sounding in exchange for trying to present a more visceral rhythm section.



I think the source should be considered too.  The early Stones, playing in the same room and probably not as loud as Count Basie's group could get on certain passages, would've sounded raunchier because they were raunchier.  If it were recorded the same way with a modern setup, it would only be a slightly cleaner and less noisy version of the same sonic mess.  I think part of the challenge was learning how to use the gear to make the rock bands come across in a better way.  Both the engineers and the bands had to get better to do that.



http://www.youtube.com/watch?v=8IVX8BayOCA&feature=results_video&playnext=1&list=PL8FB4A77C84C68706

Lots of condensers fairly close.  It's not pristine clean but a long ways from the raunchy gritty tones of Exile on Main Street, or the ice pick in the ears guitars of Big Star records of the mid 70s. 


I haven't decided which is worse, though.  Today's overcompression and "Give me some air - I wanna hear that air" engineers, plus purpose-built designer distortion boxes, some disguised as console mic channels, can easily make for some harsh, edgy listening.  Why it's become fashionable to create a vocal chain that makes the tiniest lip smack and spit sizzle right in your face like hot frying-egg splatter is interesting.  The hyper-extended bandwidth is sometimes a nice tool to have, but I sometimes think the designers of old had a better sensibility about tonal balance and what bandwidth was considered suitable for good audio.






I dunno about that; groups like the White Stripes and the Vines go for that aggressive (shall we say) sound with overloaded everything.  They're doing it as an artistic expression (well, that, and if you've ever seen an interview with them... you'd probably want to smack them... kinda like a buffoon who piles dirt in a museum and calls it art... I digress)

My take on the Jack White thing is that he's using the excessive distortion partly as compensation.  He can't wail like the singers of old he idolizes, who often had a distorted sound to their recordings, and he's become focused on getting that "sound".  Fun to mess around with, but futile in achieving the real McCoy.    Nor can he sing like Dexter Romweber, who made plenty of cleaner modern recordings in the late 80s.  Yes, it is irritating watching him futz around with a blasting vocal on some old tube tape player.  I just FFWD the DVD to the Jimmy Page segments.  ;D

Elmore's later recordings were overall cleaner and relatively more "modern", but didn't give quite the same sonic impact that the crunchy ones like this did.  Still, Elmore comes across with plenty of fire and brimstone, and would have in a 2012 DAW setup.  Jack White crunches everything into oblivion on the stuff I've heard, and it's just not the same.


http://www.youtube.com/watch?v=INUX-_XGoR4

 
We are talking about the world of fixed gain 40 dB preamps.  Before the advent of high output mics and loud bands, these seldom overloaded.  Let's not forget that RCA introduced the 'preamp' as we somewhat know it today to get the 'new' ribbon mics up to the level that condensers and carbon mics naturally presented to the bus.  Then the ribbon took over for a long long time, and the preamp became a mandatory part of the equation. 

One made gain adjustments after the preamp, not within its gain structure.  Broadly speaking, preamps were tools to get normal mic levels up to mixing bus level, which was typically -20 dBm.  This is before the time that the mic amp and the bus amp might well be the same thing, and redundant to one another. 

So, the preamp overloaded, or it didn't.  Cut and dried, almost.  If it did overload, it probably only did it on occasional peaks, the sort the band never presents during soundcheck, sitting on the launch pad.  If it was obvious it would overload, the only real option was a front end pad that irrevocably subtracted from potential output level.  Not like today, where you pad an overloading input stage and turn up the NFB based gain loop to compensate. 

Add to that mix the variables in pad theory.  On portable broadcast and other cheaper mixers you find 150 ohm T pads feeding a passive mixing bus, which feeds a single amp stage, so you have the losses occurring at mic level, rather than at high level.  Add to that the impedance issues of paralleling a bunch of mics and T pads into a single 150 ohm mic input stage.  Or consider a console with discrete input amps, one mic to one preamp.  What kind of pad might they have used?  Easily a matching pad rather than a bridging pad, which may well change the sound of the mic radically.  If one is near the edge of overload, but thinks it possible to get away without a pad, then rather than change the sound of the mic negatively, one will probably take the risk.  Especially with monitoring that, poor as it is, is still far better than what the man at home listens on.  Oh yeah, you probably don't have a pile of inline pads lying around in the first place. 

Lastly, remember they didn't live in a post-modern culture that constantly re-examined the cultural past.  These recordings are now fully out of the context in which they were birthed, disposable goods to make a quick buck.  Move on to tomorrow with the next disposable good. 

 
Some time ago, somewhere, I read that engineers of that era were actively chasing the "loud on the dial" sound, i.e. a recording that would sound like it was played on a radio with the volume knob on "11" - in other words, distorted.

The aim was (from memory) to achieve such an "exciting" sound even if the playback radio (or TV set or whatever else) was set to low volume, late at night, etc.

So, in order to produce such "grabby" tones, slam those pres and everything else.


"Wall of sound" was also a by-product of wanting to achieve bigger sonic impact within certain given headroom constraints.

In other words, the "loudness wars" aren't anything new. The overdriven signal path was the "loudness war" of that era - and, surprise, surprise, of the DAW era.

History repeats itself.
 
You read just as many complaints from 1960s rock bands who wanted to push things hot, but the engineers wouldn't let anything get into the red. 
 
lassoharp said:
Nor can he (Jack White) sing like Dexter Romweber who made plenty of cleaner modern recordings in the late 80s.

I got to mix Dex and Crow (Flat Duo Jets) a bunch of times back during my tenure at the ol' club in Hoboken (where I will be this weekend, back home!). He's one craaaaaazy dude but man those shows were great.

Plus, also, too, his sister Sara was in THE GREAT Let's Active, led of course by Mitch Easter, who made records that were the soundtrack to my junior and senior years in high school and all throughout college. (I remember going to Pier Platters in Hoboken every day for like three weeks, asking Bill or whoever was working, "Are the tickets for Let's Active at Maxwell's on sale yet????" I did get ticket #1. I was such a geek.)

-a
 
I mixed the Flat Duo Jets once around 1990.  Don't remember what the console was, but I do remember every single input clip LED was lit up most of the time, with the preamp gain as low as it would go.  I may have mixed the final Let's Active show at a girls' boarding school near DC, about the same time.  I think they were already done, and fulfilling final obligations.  Lassoharp and I have both made records with Mitch.  I don't remember anything clipping then; for me it was still the era of squeaky clean. 
 
Heard some record of the Flat Duo Jets on cassette tape back then; listened to that tape hundreds of times. I particularly loved the sonics of that tape; it went very well with the performance and the songs.
I guess digital is too much of a lens for things like distortion; it surely helped when it was partly buried in the noise floor of records and tapes.
 
Plus, also, too, his sister Sara was in THE GREAT Let's Active, led of course by Mitch Easter, who made records that were the soundtrack to my junior and senior years in high school and all throughout college. (I remember going to Pier Platters in Hoboken every day for like three weeks, asking Bill or whoever was working, "Are the tickets for Let's Active at Maxwell's on sale yet?" I did get ticket #1. I was such a geek.)


Sara's pretty awesome too and kicks ass on the drum kit.  I like the new Duo stuff.


http://www.youtube.com/watch?v=xKd5GoAqqp4&feature=related


http://www.youtube.com/watch?v=DNXcbvKiR7s&feature=related


Ah, Let's Active.  Unfortunately I never got to see them live, only the few odd videos that popped up on MTV.

 
Don't know about you guys, but I LOVE the sound of "Here Comes The Sun", "Us And Them" and "Gimme Shelter".

Best-sounding tracks I've ever heard... give or take some.
 
emrr said:
We are talking about the world of fixed gain 40 dB preamps.  Before the advent of high output mics and loud bands, these seldom overloaded.  Let's not forget that RCA introduced the 'preamp' as we somewhat know it today to get the 'new' ribbon mics up to the level that condensers and carbon mics naturally presented to the bus.  Then the ribbon took over for a long long time, and the preamp became a mandatory part of the equation. 

One made gain adjustments after the preamp, not within its gain structure.  Broadly speaking, preamps were tools to get normal mic levels up to mixing bus level, which was typically -20 dBm.  This is before the time that the mic amp and the bus amp might well be the same thing, and redundant to one another. 

So, the preamp overloaded, or it didn't.  Cut and dried, almost.  If it did overload, it probably only did it on occasional peaks, the sort the band never presents during soundcheck, sitting on the launch pad.  If it was obvious it would overload, the only real option was a front end pad that irrevocably subtracted from potential output level.  Not like today, where you pad an overloading input stage and turn up the NFB based gain loop to compensate. 

Add to that mix the variables in pad theory.  On portable broadcast and other cheaper mixers you find 150 ohm T pads feeding a passive mixing bus, which feeds a single amp stage, so you have the losses occurring at mic level, rather than at high level.  Add to that the impedance issues of paralleling a bunch of mics and T pads into a single 150 ohm mic input stage.  Or consider a console with discrete input amps, one mic to one preamp.  What kind of pad might they have used?  Easily a matching pad rather than a bridging pad, which may well change the sound of the mic radically.  If one is near the edge of overload, but thinks it possible to get away without a pad, then rather than change the sound of the mic negatively, one will probably take the risk.  Especially with monitoring that, poor as it is, is still far better than what the man at home listens on.  Oh yeah, you probably don't have a pile of inline pads lying around in the first place. 

This is very interesting to me, not least because I recently dove into the world of fixed gain preamps. I have 6 channels, each going into a 600 ohm variable attenuator and then into my A to D. So far so good; they work great with dynamic mics and with condensers on quiet sources, but in the past I have run into clipping with one of my mic amps when I used a Blue condenser with a very hot output. I have one in-line attenuator, but was thinking about finding a way to have several more without going the in-line route.

With that in mind, could I have your opinion on what the best-sounding pad design would be for use between a condenser mic and an old preamp (with an input transformer and, I think, a 600 ohm input impedance)? I suppose I could use either a fixed -20 dB with switches, or maybe -20 and -10 if it wouldn't be too hard. And are there any issues with running phantom or T-power through outboard pads?

It would be great if someone was making 8-channel phantom power supplies with switchable pads. Maybe I can get something custom made. T-power as an option for channels 1 and 2 would be on my wish list as well... then I wouldn't have to bring so much stuff when I record elsewhere.
 
For mics they should generally be wired for 150/200 ohm input, whatever they have, rather than 600.  600 will usually give an incorrect response when driven with the lower source Z.  As has been explained a million times over, the 20 dB U pad is the only thing that makes any real technical sense, and there's no magic-bullet workaround to avoid it.  If you make a pad of less than 20 dB, you load most mic types excessively, with either earlier mic overload (condenser) or loss of high frequency (dynamic), or both.  I record full-blast bands, and it's rare I don't have a 20 dB pad in front of old 40 dB preamps, even with ribbons.  If I put a condenser that has a pad into a pre like this, I might end up with the mic's -10 or -20 pad on and a 20 dB pad at the preamp.  Phantom needs to be between mic and pad; otherwise the pad resistors limit phantom current. 
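For anyone rolling their own pad, the textbook matched T-pad formulas are a useful starting point (a balanced H pad just splits each series arm in half across the two signal legs). This is a sketch of the standard attenuator math, not a claim about the exact U-pad wiring discussed above:

```python
def t_pad(z, loss_db):
    """Resistor values for a symmetric T pad matched to impedance z
    at both ends.  Returns (r_series, r_shunt) in ohms; the pad has
    two equal series arms of r_series and one shunt of r_shunt."""
    k = 10 ** (loss_db / 20)            # voltage ratio of the pad
    r_series = z * (k - 1) / (k + 1)    # each series arm
    r_shunt = 2 * z * k / (k ** 2 - 1)  # shunt arm
    return r_series, r_shunt

r_ser, r_sh = t_pad(150, 20)
print(f"20 dB pad at 150 ohms: series arms {r_ser:.1f}, shunt {r_sh:.1f}")
# -> series arms 122.7, shunt 30.3 (ohms)
```

Snap those to the nearest standard resistor values and the attenuation lands within a fraction of a dB.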
 
Thanks, Doug. I see a guy on eBay selling in-line 20 dB U pads built into the XLR connector rather than a short tube, for cheap, so I think I will get a few of those for now. For the next rack project I'll see if I can get someone to build me a control box for the modules that has input - phantom and T power - pad - preamp, and then accepts the output back into the unit to run it through a 600/600 rotary attenuator on each channel.

I'm probably mistaken about the input impedance on my units; they are 1108s and 9470As, which I just had racked up.  As you know, they can be strapped in various ways, including 150 ohms, and whatever they are now sounds more than right to me.
 
Re my previous post: I checked, and they are all strapped for 600 ohm input, as I thought originally.

I saw an old post on another forum which stated that this is a nominal figure for what is expected there and it's actually higher (on the 1108 specifically). Dunno whether that is true; at any rate, they sound good to me.
 
It is true that the secondary is unloaded, so the reflected Z at the primary is higher than 600, but the transformer wants at least one side to see something close to rated (load or source), and 150 driving 600 tends to give most unloaded input iron a smiley-face EQ: a resonant hump at the bottom and the top.  In some transformers the response totally falls apart.  You are also losing about 4 dB of gain by strapping the input for 600. 
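The "reflected Z is higher than rated" point follows from basic transformer math: impedance scales by the square of the turns ratio, which the rated impedances imply. A sketch, using a purely hypothetical 600:50k input transformer with its secondary seeing only a tube's grid resistor:

```python
def reflected_primary_z(rated_pri, rated_sec, actual_sec_load):
    """Impedance the primary actually presents, given the real
    secondary load.  Impedance transforms by the turns ratio
    squared, implied by the ratings: n^2 = rated_sec / rated_pri."""
    n_squared = rated_sec / rated_pri
    return actual_sec_load / n_squared

# Hypothetical 600:50k transformer, secondary loaded only by a
# 1 Mohm grid resistor instead of a matched 50k load:
z = reflected_primary_z(600, 50_000, 1_000_000)
print(f"primary presents roughly {z / 1000:.0f}k ohms, not 600")
# -> roughly 12k ohms
```

This is the resistive part only; in a real unloaded input the winding capacitance and leakage inductance dominate at the frequency extremes, which is where the "smiley face" resonances come from.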
 
Interesting. Is it safe to assume that the same would be true of the 9470As, i.e. I'd be better off strapping to 150 for dynamic and ribbon mics? I do like flat mids. I might have a frequency response reading from my tech, but I'm guessing it probably would have been done with a line signal to the input.
 
So, a quick question regarding what you guys are talking about.  I have a tube pre with an impedance select switch (500 and 2k), and I've been having an issue getting my ribbon mic to sound "better" through it (it sounds way better through one of my Neve clones).  Could this be the reason for the drastic tonal shift?  I definitely hear a tonal shift when going between the two preamps (more so than with other mics in the same A/B pre scenario).  The mic is rated at Output Impedance: 300 Ohms @ 1K (nominal) and Rated Load Impedance: >1500 Ohms @ 300 Ohms.  The Neve has an input impedance of either 1.2K or 300 Ohms.
 
It's related, but different, and misleading at face value. 

Most modern preamps (including the Neve, I think) have resistively loaded secondaries, and the switch means what it says.  By the book, following bridging theory religiously, you'd use 50 ohm dynamic mics with the 500 setting and 150 ohm mics with the 2K setting.  Or use the 500 setting with 150 ohm dynamic mics when you want to roll off the highs a bit. 
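The bridging convention behind those pairings is easy to sanity-check: a "bridging" load is several times the source impedance, so it barely loads the source. The 5x threshold below is a common rule of thumb, not anything from a spec sheet:

```python
def is_bridging(source_z, load_z, min_ratio=5):
    """Rule-of-thumb bridging check: the load should be several
    times the source impedance so it barely loads the source."""
    return load_z / source_z >= min_ratio

# 50 ohm mic into the 500 ohm setting: 10x, bridges cleanly
print(is_bridging(50, 500))   # True
# 150 ohm mic into 500 ohms: about 3.3x, loads the mic audibly
print(is_bridging(150, 500))  # False
```

Which matches the pairing above: 50 ohm dynamics on the 500 setting, 150 ohm mics on the 2K setting, unless you want the high-frequency rolloff the heavier load brings.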

Almost everything past a certain vintage refers to 'expected source Z', and has an unloaded secondary, where no set Z is reflected back to the mic.  The Z in an input like this is more analogous to a speaker Z chart; changes radically with frequency, and thus loads the mic differently across the spectrum.  Ribbons and some dynamics can be very sensitive to this, and vintage ribbons are designed with this in mind. 

Add to the confusion that the RCA 44 and 77 have different loading theories behind them.  I may have it backwards, but I think the 44 is designed for a loaded (matched) condition and the 77 for an open (bridging) condition.  It's addressed in the text of the RCA mic catalogs. 

Some people love to ask how a preamp can make any difference, and I say they have never listened to a ribbon back to back through a modern transformerless preamp and a transformer coupled preamp with unloaded secondary. 


 
I have a Beyer M-500 modern reissue, and it often gives a very colored response when used with any of my vintage tube pres having an unloaded secondary.  It behaves very differently on vintage Neve pres and on transformerless, IC-based console pres.  Nothing much you can do about it as-is, though.  Trying Marik's upgrade packages may be worth checking into.
 