Sorry, I thought I was responding to the other thread, haha, and thank you. Very confusing response.
I am definitely not crazy about the line inputs just being padded and then fed to the same discrete stage of the pre-amp, nor about the design of that pre. If this is being used solely for line-level signals in a studio (i.e., as an audio interface), I would definitely bypass the whole input stage. I am trying to find where X39 and X40 go; I am guessing they go to some op-amp inputs just like the ones shown next to these outputs. I don't see any global feedback in that input stage, just the Sziklai diff pair going out and nothing returning; not the best design if you ask me. It looks like a low-budget prosumer preamp from the 80s.
Would an OPA1632 be any good as a buffer for driving a 600 ohm device?
struggling to drive my Gyraf Pultec build
The Gyraf schematics I have seen suggested a Lundahl input transformer that worked OK with a 10k terminating load. It seems like it might be easier to just swap the input transformer and resistor to get a higher input impedance.
The cut and boost was all wrong, as the whole circuit (passive) depends on known source and load impedances.
The THD vs output voltage graph has a line for a 600 Ohm load, and it looks really reasonable up to around 22 dBu.
Padding an IC-based output circuit to give a 600 Ohm 'send' impedance is still not the same as sending from a transformer in the previous piece of gear... never mind what the output of the Pultec is actually feeding, which should be 600 Ohms and possibly with a reactive element (a transformer, not just resistive).
Chasing 'authenticity' with a complex system that was a product of the specific time and environment is a crazy idea.
When you design your next console you can add as many active line inputs as you want to every strip.
Before even thinking about adding input/output transformers, I would fix that input stage first.
Some time ago I bought a Behringer UMC404HD interface just to perform measurements, so I don't know whether it sounds OK or what its distortion levels are, but if its input stage is the same as this one, I am afraid I made a bad purchase.
This is not a console, John; this is an 8-channel audio interface with a crappy front end. That is basically all the analog portion of it. Bad cheap design.
It is fairly common in modern console design to send the different inputs into the one active gain stage (cost and PS current consumption).
While a simple line input might be slightly higher performance, the net effect on the audio quality of existing line input sources is generally not significant.
JR
So much of 'modern' gear is NOT designed and tested to provide a true 600 Ohm output impedance with the corresponding voltage headroom to overcome the 6 dB loss in level when you actually place a 600 Ohm load on the output. Much of it presumes that the load will be on the order of 10K Ohms or more (previously referred to as a bridging load), whereas gear in the days of valves with a 600 Ohm output impedance could drive up to 10 or 15 such 10K bridging loads before the drive capability (distortion etc.) was compromised. People born after the age of valves have not bothered to learn WHY so many audio interfacing standards came about, and the sometimes serious implications.

Still, Hi-Fi then was about 50 Hz to 10 kHz bandwidth at 'less than about a couple of percent distortion': variously 10%, then a more stringent 1% or 0.1% as technology improved. The LA2A compressors start to run into significant distortion (by today's terms) when delivering more than +18 dBu into a 600 Ohm load unless the output valve is in good condition. Running into 10K (a typical converter or mixing desk input) it is happy to above +24 dBu, if I remember correctly.
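The 6 dB figure and the bridging-load behaviour fall straight out of voltage-divider arithmetic. A quick sketch (plain Python, treating everything as purely resistive and ignoring transformer reactance, current limits, and distortion, which in practice bite before the level math does):

```python
import math

def load_loss_db(source_z, load_z):
    """Level drop (dB) of a source with output impedance source_z when
    loaded by load_z, relative to the unloaded (open-circuit) level."""
    return 20 * math.log10(load_z / (source_z + load_z))

# 600 ohm source into a single 600 ohm load: the classic 6 dB drop.
print(round(load_loss_db(600, 600), 2))       # -6.02

# Same source into one 10K bridging load: barely half a dB.
print(round(load_loss_db(600, 10_000), 2))    # -0.51

# Ten 10K bridging loads in parallel (1K total): about 4 dB down,
# before considering the drive current the output must now supply.
print(round(load_loss_db(600, 10_000 / 10), 2))   # -4.08
```

This only models level; the "10 or 15 bridging loads" ceiling in the post is set by the valve stage running out of current and distorting, not by the divider alone.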
As a handy aside, ALL the revered 'Neve' gear is NOT correct. The types of capacitors and other components of the originals have gone (decomposed), so NO modern version can get back to the space and time that existed when the 'classic' recordings were made; and then you have to consider that ALL old recording machines are 'dead' (have moved on and been fitted with new parts simply to make them work at all).

True 600 Ohm outputs and inputs are a nightmare to work with because you simply can't just put another 600 Ohm load in parallel; you lose another 3 dB or so (IIRC). So in places where this was a likely requirement you had distribution amplifiers with designed-in capability to feed typically up to 10 x 600 Ohm loads. Broadcasters had racks full of distribution amplifiers: essentially unity gain, but with the ability to drive sufficient current into the extra loads, which were NOT wired directly in parallel because each of the 10 outputs had its own resistors/transformer winding to keep them all separate. The premise was that you could accidentally short any one or more of the 10 outputs and the signal would not be impaired. Even having a 'listen' jack on a patchfield, where someone could plug in a pair of what were often 600 Ohm headphones, meant a possible compromise that is difficult to circumvent.
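For the 'another 600 Ohm load in parallel' case, the extra drop with a purely resistive 600 Ohm source actually works out to about 3.5 dB (so the remembered 3 dB is close). A quick check, same idealized divider model as before:

```python
import math

SOURCE_Z = 600.0  # resistive 600 ohm source impedance (idealized)

def level_db(load_z, source_z=SOURCE_Z):
    # Output level in dB, relative to the open-circuit level, into load_z.
    return 20 * math.log10(load_z / (source_z + load_z))

one_load = level_db(600)        # one 600 ohm load: -6.02 dB
two_loads = level_db(600 / 2)   # two 600 ohm loads in parallel = 300 ohms
print(round(two_loads - one_load, 2))   # -3.52
```

Each extra parallel termination keeps chipping away like this, which is exactly why broadcasters used distribution amplifiers with individually built-out outputs instead of daisy-chaining loads.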
Shrug... to make it 'bullet proof', then EVERY output HAS to pump out, say, +24 dBm.

Which of course is unrealistic.
My suggestion was based on operators being adequately educated, so they would understand the need to perhaps use one such unit at the input and output of the device concerned.

The max output level to achieve +24 dBu into 600 Ohms while itself presenting an output impedance of 600 Ohms is not realistic with a 1646, because the output stage would have to provide +30 dBu (for a resistive output-impedance build-out) or some cunning way to make the near-zero output impedance of these chips 'appear' to be 600 Ohms.
True. However, I don't think they need actual +24 dBu headroom to operate correctly.

It is the passive EQ units (Pultec-type design) that NEED a true 600 Ohm SOURCE impedance to make them perform to the original design specifications.
It still suffers from the low-end roll-off issue. Initially I thought it was because I should have used 300 ohm output resistors, but this brought the roll-off point up to ~5 kHz.

What happens if you replace R109 & R110 with 10 ohms?
Please could someone explain why this doesn't work? Can it be modified to work?
Yes it does, a Lundahl LL5402 (600 ohm).
Your G-Pultec has an input xfmr, right? What is it?
Datasheet says: ideally used with mixed feedback drive circuits.
I'll try the lower resistors; my results do suggest that going the other way in value would move the cutoff down.
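That matches the simplest first-order model: the series send resistors work against the transformer's shunt primary inductance, giving a high-pass corner at roughly fc = R / (2*pi*L), so smaller resistors push the corner down. A sketch with a placeholder inductance (I don't have the LL5402's actual primary inductance to hand, so the 1 H figure below is purely illustrative; a real build can land well above this ideal prediction once winding resistance, source impedance, and core behaviour enter):

```python
import math

L_PRIMARY_H = 1.0  # PLACEHOLDER primary inductance; check the LL5402 datasheet

def hp_corner_hz(series_r_ohms, l_henries=L_PRIMARY_H):
    # First-order high-pass corner: series R against shunt primary inductance.
    return series_r_ohms / (2 * math.pi * l_henries)

for r in (10, 100, 300, 600):
    print(f"{r} ohm -> {hp_corner_hz(r):.1f} Hz")
```

Whatever the true inductance, the trend is the point: cutting the series resistance from 600 to 10 ohms drops the corner by the same factor of 60.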
That sounds like it was designed as an output transformer, not for input.
Yes it is an output transformer.