Mic preamps with seemingly deep sound stage (depth of field)

Keep typing that bad word and get your mouth washed out with soap soon.
Must be an auto check / do-not-type list in the forum code???
Sorry for the derail
 
This is an interesting subject. I'm not sure a preamp can fix everything. My 2 cents, however, goes back to tape for an explanation. Run a square wave through a tape recorder and look at the shape of the wave. The theory I've heard over the years is that the high frequencies lag the low frequencies. Later last-generation recorders had phase adjustments to compensate.
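A minimal numeric sketch of that square-wave test (Python with numpy/scipy assumed; a first-order allpass stands in for the machine's phase response, not a real tape model):

import numpy as np
from scipy.signal import lfilter, square

fs = 48000
t = np.arange(fs) / fs
x = square(2 * np.pi * 100 * t)            # 100 Hz square wave, 1 second

# First-order allpass: H(z) = (a + z^-1) / (1 + a*z^-1).
# |H| = 1 at every frequency, but the phase (delay) varies with
# frequency, roughly what is described above for tape.
a = -0.6
y = lfilter([a, 1.0], [1.0, a], x)

# Same amplitude spectrum, different shape: the edges tilt and ring.
print("input peak :", np.abs(x).max())     # 1.0
print("output peak:", np.abs(y).max())     # > 1.0 even though |H| = 1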

While at a local studio recently, I was checking out the CLASP system for tape recording into Pro Tools. While listening to the input of a recording of a great trio playing some interesting jazz, the system switched to Pro Tools playback of what had gone through the Otari MTR-90 2-inch, and I instantly enjoyed that sound.

The highs sat back in the sound field. On input the cymbals were up in my face; after passing through analog tape into Pro Tools, they sat back in the sound field on playback. It sounds like more depth, but I believe it's the high frequencies lagging the mids and lows and becoming a gentler top end.

As far as the mic pre thing goes, I do enjoy the Grace 801 or 201 for a sound similar to what you get from the Gordon preamp, which is a stellar unit.

My own experience, though, says buy a pair of Schoeps microphones if you want real depth of field. They beat all the other mics I've used in that area. I once used a pair to record a 12-piece string section and needed very few spot mics, because the accuracy at the conductor position was so much better with the Schoeps. When you pull the mics back in the room, the on-axis and off-axis response stays the same, which allows the sound of the instrument to stay focused in the deeper sound field. I love those mics. I once used them on a 7'4" Yamaha piano through a very cheap mixer and was surprised at how real the piano sounded in spite of the economy mixer preamps.

This topic can go on forever.


 
fazer said:
Run a square wave through a tape recorder and look at the shape of the wave. The theory I've heard over the years is that the high frequencies lag the low frequencies.
I seem to recall that, if you play the recording back backwards and re-record it through a similar machine, the phase problem solves itself and the square wave is square again.

David
 
We used to have a bunch of 2" machines at the studio. One particularly detail-oriented engineer requested that we transfer/copy his reels in reverse, with short cables directly connecting the machines (nix on the patchbay). So we flipped the tapes and ran the copies machine-to-machine; I happily obliged.

While there was a scope in the patchbay, I didn't print any square waves to test what you are talking about...
...but what kind of lag are you talking about, anyway? I would like to test that.

cheers,
jb

 
Balijon said:
Although digital recording certainly has its merits, it has a fundamental problem that has not been resolved after three decades: the lower the level, the fewer bits you have available to describe the wave-pattern. Relative distortion increases as the level goes down. These factors are most clearly noticeable when you digitally record an (analog!) reverb decay, like that of a church. You clearly lose low-level detail in digitally recorded material. If you mix digitally, these effects are compounded by 'bit chopping' of the low-level material when you reduce the mix level of a track. Analog mixing does not change the wave-pattern this way, and in my opinion it clearly has more detail in the low-level spectrum.
Balijon, what you describe is a cr*p digital system.  On a good digital system, distortion doesn't increase as you go down in level; only the S/N gets worse ...  just like a good analog system.

You can test this for yourself with the Sony PCM-F1. My early sample had a switch to go from 16b to 14b EIAJ. It so happens that in 16b, the F1 has practically perfect performance. You can take a sine wave or music 20dB BELOW the noise and still hear a clean (but noisy) sine wave or your music. The analogue noise just happened to dither the A/D at exactly the right level. Your reverb decay example is a particularly good test, as is piano music.

If you move the switch to 14b, there is insufficient dither, and you hear the crackly low-level distortion etc. which is symptomatic of a poor digital system. The signal disappears when it reaches the noise level.

But the 16b PCM-F1 behaves just like a perfect analogue system with 93dB S/N ratio.  I wish I still had a working one.  :'(

On mike preamps, I'm not sure words like "depth" are appropriate for a multi-track recording.  Was there "depth" in the sound booth?

For a live recording in a good hall, microphones are the major factor. Fig-8s & hypercardioids are better than cardioids. If you want "depth" but are not interested in accuracy, spaced omnis are appropriate.

My VERY biased preference in the last century was for a Calrec Soundfield Mk4 which has its own preamps.

In this millennium, the CoreSound TetraMic sets the standard.  Something like the Millennia Media, Earthworks (or Metric Halo for a computer card) is appropriate.  ;D
 
It's phase shift at high frequencies. The phase shift delays the high frequencies. There are so many processes in tape to get from input, to record head, to tape, to playback. It's electro-mechanical-magnetic. Lots of changes in energy, lots of alignments to make.

When I worked in an analog studio in the old days, we would mix to 2-track and then make dubs backwards to correct the phase shift. These would be 7½-inch reel dubs to take home or send out. The top-end click on a kick drum in the mix would stay more present. When recording everything forwards, the click on the kick would get more lost with every dub pass.

This is only one of many possible answers for the subjective impression of increased depth of field. When you're listening in mono, it has to come from factors other than left-right separation.
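A minimal sketch of why the reverse dub works, under the same toy-allpass assumption as above: time-reversal conjugates the phase, so a second pass through the "machine" cancels the first instead of doubling it.

import numpy as np
from scipy.signal import lfilter, square

fs = 48000
x = square(2 * np.pi * 100 * np.arange(fs) / fs)

a = -0.6
def machine(s):                 # allpass stand-in for one tape pass
    return lfilter([a, 1.0], [1.0, a], s)

master      = machine(x)                         # mix hits tape once
forward_dub = machine(master)                    # normal dub: phase error doubles
reverse_dub = machine(master[::-1])[::-1]        # flip tape, dub, flip back

mid = slice(1000, -1000)        # ignore filter start-up transients
print("forward dub error:", np.abs(forward_dub[mid] - x[mid]).max())
print("reverse dub error:", np.abs(reverse_dub[mid] - x[mid]).max())

In the frequency domain the reverse dub gives X·H·H* = X·|H|², which for an allpass (|H| = 1) is exactly X: the square wave comes back square, as David recalled.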



 
Balijon said:
Although digital recording certainly has its merits, it has a fundamental problem that has not been resolved after three decades: the lower the level, the fewer bits you have available to describe the wave-pattern.

That is so not true.

-a
 
Andy Peters said:
Balijon said:
Although digital recording certainly has its merits, it has a fundamental problem that has not been resolved after three decades: the lower the level, the fewer bits you have available to describe the wave-pattern.

That is so not true.

-a

If you have a 16-bit A/D converter with a maximum input level of 5V, it has 65536 fixed steps (0.000076 V/step) as its resolution.
If you feed it a 5V wave-pattern, it uses all 65536 steps to describe it (effectively all 16 bits).
If you feed it a 2.5V wave-pattern, it uses 32768 steps to describe it (effectively 15 bits).
If you feed it a 1.25V wave-pattern, it uses 16384 steps to describe it (effectively 14 bits).
If you feed it a 0.625V wave-pattern, it uses 8192 steps to describe it (effectively 13 bits).
etc.
The lower the signal level, the fewer steps are used to describe the wave-pattern (= fewer of the converter's available bits are actually used).
So your relative accuracy in describing the wave-pattern gets lower as your signal level gets lower (I call this an increase in distortion).
EDIT: every 6dB of signal reduction results in 1 bit less resolution being used to describe the wave-pattern.

So, to clearly make a statement that may be flamed but is technically true:
If I have two converters:
- one 16-bit with 5V input
- one 14-bit with 5V input
then feeding a 1.25V wave-pattern to the 16-bit converter gives the same resolution quality as feeding the same wave-pattern at 5V to the 14-bit converter.

grT
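Balijon's counting argument, reproduced as a sketch (Python assumed; the 5V/16-bit numbers are from the post). The replies below explain why, with dither, this loss of steps shows up as noise rather than distortion.

import math

full_scale_v = 5.0
lsb = full_scale_v / 2**16            # ~0.000076 V per step, as above

for amp in (5.0, 2.5, 1.25, 0.625):
    steps = int(amp / lsb)            # codes spanned by this amplitude
    print(f"{amp:5.3f} V -> {steps:5d} steps (~{math.log2(steps):.0f} bits)")
# Prints 65536 / 32768 / 16384 / 8192 steps,
# i.e. 16 / 15 / 14 / 13 effective bits, matching the post.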
 
there are many sounds available now,

you have to go to AES and hear all the new crap in order to present a good argument nowadays,

it changes so fast,

still have the tape player in the car but that is mainly so nobody will steal it,

you can't beat vinyl, sorry folks,

that's how you tell the good ears from the bad,

people who can dig what LP's have to offer can hear real good.
get the classics, like the API, the Langevin AM16, the Neve, maybe a Tele,

 
So, to clearly make a statement that may be flamed but is technically true:
If I have two converters:
- one 16-bit with 5V input
- one 14-bit with 5V input
then feeding a 1.25V wave-pattern to the 16-bit converter gives the same resolution quality as feeding the same wave-pattern at 5V to the 14-bit converter.


When using the 1st & 2nd gen ADATs and DA-88s, I could certainly hear the difference in signal quality between lower input levels and higher ones. With a modern DAW interface at 16 or 24 bit, I can't say I hear that same difference in any significant way. In practical terms it amounted to always hitting the ADAT/DA-88 inputs with as high a signal level as possible, while keeping levels into the DAW as low as S/N allows (just to preserve some headroom for mixing ITB). This may not be a fair comparison, because the ADAT/DA-88 uses tape, and the difference in sound compared to hard disk is apples and oranges. I also tried 24-bit in the DAW, compared it to 16-bit, and stayed at 16-bit because it sounded adequate.

It just seems that going really hot into the DAW doesn't show the same differences in audio quality that it did on the older systems. This is just my casual observation based on typical usage.
 
I have just done a quick read through the topic. Leaving digital & DAWs aside, I did not notice any mention of balanced or push-pull preamp circuits; in my experience this seems to hold true for both tube & transistor circuits. Many of the recordings from the '50s & '60s have a depth & punchiness that more modern recordings do not possess; a lot of those recordings were made through push-pull preamps, using tubes since those were commonly available at the time.

I have studied many preamp & headphone amp circuits using this topology. I have a single-ended class A headphone amp designed by John Linsley Hood; I also built a push-pull amp by the same designer. The latter has a sonic character that is fuller & punchier than the single-ended circuit; it just sounds better. Distortion was not an issue with either circuit, & I doubt this difference in sound quality can be shown by measurements. I will ultimately try & compare the THAT Corp. 1510 & 1570 preamps to see if I come up with the same result. Will post when I get around to the test.
 
Balijon said:
So, to clearly make a statement that may be flamed but is technically true:
If I have two converters:
- one 16-bit with 5V input
- one 14-bit with 5V input
then feeding a 1.25V wave-pattern to the 16-bit converter gives the same resolution quality as feeding the same wave-pattern at 5V to the 14-bit converter.

That's how a naive theoretical converter works, i.e. by truncation. It is also how integer signals without scaling work in digital formats (fast becoming obsolete even on DSP chips). The truncation idea is an easy mistake to make when looking at A/D conversion on paper, with no knowledge of how the conversion actually works. It doesn't take into account how noise shapes the incoming signal, which is exactly what happens when recording to an analog medium as well. And on the digital side we can have perfect control over this phenomenon.

Google "dither".

It's not exactly the easiest of topics to grasp, but these days even the cheapest converters employ some kind of dithering scheme. In its simplest form, dither is just the noise floor of the incoming signal.

PS. I also remember the age-old "stair-step view" of digital signals in school books, which takes into account neither bandwidth nor noise. That picture is how two important myths were born: the truncation myth you described, and "more stair-steps equal better audio", which even MOTU used to advertise their products, conveniently forgetting our most important friend, Nyquist.
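A minimal numpy sketch of Kingston's point, using a toy quantizer (np.round, amplitudes in LSBs): a sine spanning only ±1.5 LSB is quantized plainly and then with TPDF dither added first. Undithered, the error is correlated with the signal and shows up as harmonics; dithered, it becomes benign noise.

import numpy as np

fs, f = 48000, 997          # 1 s of signal, so FFT bins are 1 Hz wide
n = np.arange(fs)
x = 1.5 * np.sin(2 * np.pi * f * n / fs)        # peak amplitude: 1.5 LSB

rng = np.random.default_rng(0)
tpdf = rng.uniform(-0.5, 0.5, fs) + rng.uniform(-0.5, 0.5, fs)

plain    = np.round(x)           # bare quantizer
dithered = np.round(x + tpdf)    # same quantizer, TPDF dither added first

def level_db(y, freq):
    # Level near one frequency, relative to the strongest component.
    spec = np.abs(np.fft.rfft(y * np.hanning(fs)))
    return 20 * np.log10(spec[freq - 2 : freq + 3].max() / spec.max())

for k in (3, 5, 7):              # odd harmonics of the 997 Hz tone
    print(f"H{k}: plain {level_db(plain, k * f):6.1f} dB,",
          f"dithered {level_db(dithered, k * f):6.1f} dB")

The plain quantizer shows distinct harmonic spikes; with dither they sink into a flat noise floor, which is exactly the 16b-vs-14b PCM-F1 behaviour ricardo describes.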
 
OT:

what do you guys think of direct to disc LP's?

any good ones out there?

i thought the Santana double album sucked, which was really an album and a half, wtf, over?

i heard a direct to disc classical guitar LP that had a very 3D sound to it,

or maybe i just got some bad crack that night,  :-*

 
Balijon said:
Andy Peters said:
Balijon said:
Although digital recording certainly has its merits, it has a fundamental problem that has not been resolved after three decades: the lower the level, the fewer bits you have available to describe the wave-pattern.

That is so not true.

-a

If you have a 16-bit A/D converter with a maximum input level of 5V, it has 65536 fixed steps (0.000076 V/step) as its resolution.
If you feed it a 5V wave-pattern, it uses all 65536 steps to describe it (effectively all 16 bits).
If you feed it a 2.5V wave-pattern, it uses 32768 steps to describe it (effectively 15 bits).
If you feed it a 1.25V wave-pattern, it uses 16384 steps to describe it (effectively 14 bits).
If you feed it a 0.625V wave-pattern, it uses 8192 steps to describe it (effectively 13 bits).
etc.
The lower the signal level, the fewer steps are used to describe the wave-pattern (= fewer of the converter's available bits are actually used).
So your relative accuracy in describing the wave-pattern gets lower as your signal level gets lower (I call this an increase in distortion).
EDIT: every 6dB of signal reduction results in 1 bit less resolution being used to describe the wave-pattern.

So, to clearly make a statement that may be flamed but is technically true:
If I have two converters:
- one 16-bit with 5V input
- one 14-bit with 5V input
then feeding a 1.25V wave-pattern to the 16-bit converter gives the same resolution quality as feeding the same wave-pattern at 5V to the 14-bit converter.

grT

You do realize that a full-scale sine wave (for example) will have parts of the signal that are very near 0V? Are you saying, then, that that part of the signal isn't coded reasonably?

If your word length is 16 bits, it takes 16 bits to code the sample, regardless of amplitude.

Consider the converter reference voltage to be the same as an analog power-supply rail. Both set the maximum amplitude. If you have a 5V rail, and a 100 mV signal, then by your logic, you're "throwing away" something if the signal isn't swinging to the rail.

The point, which I think you're missing, is that the quantization noise of a modern 24-bit converter is at or below the analog noise floor, so your concerns are probably misplaced.

-a
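For scale, the textbook figure behind Andy's point (ideal-converter formula, SNR ≈ 6.02·N + 1.76 dB for a full-scale sine):

# Full-scale sine-to-quantization-noise ratio of an ideal N-bit converter.
for bits in (14, 16, 24):
    print(f"{bits} bits: {6.02 * bits + 1.76:5.1f} dB")
# 14 bits -> 86.0 dB, 16 -> 98.1 dB, 24 -> 146.2 dB.
# ~146 dB is below the thermal noise of any real analog front end,
# which is why the low-level worry mostly evaporates at 24 bits.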
 
Kingston said:
Balijon said:
So, to clearly make a statement that may be flamed but is technically true:
If I have two converters:
- one 16-bit with 5V input
- one 14-bit with 5V input
then feeding a 1.25V wave-pattern to the 16-bit converter gives the same resolution quality as feeding the same wave-pattern at 5V to the 14-bit converter.

That's how a naive theoretical converter works, i.e. by truncation. It is also how integer signals without scaling work in digital formats (fast becoming obsolete even on DSP chips). The truncation idea is an easy mistake to make when looking at A/D conversion on paper, with no knowledge of how the conversion actually works. It doesn't take into account how noise shapes the incoming signal, which is exactly what happens when recording to an analog medium as well. And on the digital side we can have perfect control over this phenomenon.

Google "dither".

It's not exactly the easiest of topics to grasp, but these days even the cheapest converters employ some kind of dithering scheme. In its simplest form, dither is just the noise floor of the incoming signal.

PS. I also remember the age-old "stair-step view" of digital signals in school books, which takes into account neither bandwidth nor noise. That picture is how two important myths were born: the truncation myth you described, and "more stair-steps equal better audio", which even MOTU used to advertise their products, conveniently forgetting our most important friend, Nyquist.
In order to make a 14-bit signal have as much resolution as a 16-bit signal, the dither amplitude would have to be very large, so that it blurs the two LSBs of the 16-bit word. If I follow your line of thought, one could make an 8-bit signal resolve as much as a 16-bit one... that is mind-challenging, to say the least.
Then one could argue that a 5-bit signal resolves as much as a 24-bit one. Oh! But that's exactly what happens in modern sigma-delta converters. It is achieved by trading bandwidth for resolution: the 5-bit signal is produced at a multi-MHz sample rate, then decimated, filtered and dithered to produce the multi-bit signal.
There is no way a 14-bit signal can resolve as much as a 16-bit one at the same sample rate under normal conditions.
Truncation is not a myth, it's a reality.
Dithering applied to a low-resolution converter allows "unburying" signals when they are of sufficient duration, by averaging several adjacent samples. At a given sample rate, it is theoretically possible to extract a signal 1 bit below the LSB if its duration is 4 samples; 2 bits below would need 8 samples, and so on. But the S/N ratio of a 14-bit signal will never be that of a 16-bit.
And clearly, whatever the converter and dithering applied, a -12dBfs signal will have 2 fewer significant bits than a 0dBfs one. And a -15dBfs signal will still be expressed as a 16-bit word by a 16-bit converter, but the accuracy will be extremely poor.
This is, obviously, neglecting the fact that linearity may not be perfect, which could make things worse.
This used to be a concern with 16-bit converters; now, with 24-bit, one can safely operate at about -20dBfs without fear of losing much (still leaving 20 significant bits).

And what is wrong with "more stair-steps equal better audio"? The steps do exist, although they are not the reconstructed signal; there is a definite correlation between the fineness of the stair-steps and the quality of the reconstructed signal.
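A small numpy sketch of the "unburying" described above, with the same toy LSB-unit quantizer: a sine whose peak is only 1/4 LSB survives dithered quantization, and a 1-second FFT (an averaging process) pulls it well out of the noise, much like ricardo's F1 test.

import numpy as np

fs, f = 48000, 1000
x = 0.25 * np.sin(2 * np.pi * f * np.arange(fs) / fs)  # 1/4 LSB peak

rng = np.random.default_rng(1)
dith = rng.uniform(-0.5, 0.5, fs) + rng.uniform(-0.5, 0.5, fs)
q = np.round(x + dith)          # every sample is a whole number of LSBs

spec = np.abs(np.fft.rfft(q)) / fs       # 1 Hz bins for a 1 s record
tone  = 20 * np.log10(spec[f])
floor = 20 * np.log10(np.median(spec[20:]))
print(f"tone at 1 kHz: {tone:6.1f} dB, median noise bin: {floor:6.1f} dB")
# The sub-LSB tone sits tens of dB above the per-bin noise: it was
# never truncated away, it is just noisy.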
 
Kingston said:
PS. I also remember the age-old "stair-step view" of digital signals in school books, which takes into account neither bandwidth nor noise. That picture is how two important myths were born: the truncation myth you described, and "more stair-steps equal better audio", which even MOTU used to advertise their products, conveniently forgetting our most important friend, Nyquist.

Of course the stair-step depiction of samples is utterly wrong. The samples exist only at the sampling instants. This is why proper DSP textbooks draw each sample as a point at the top (or bottom) of a vertical line, the lines separated by the sampling period, with the points not connected.

The point, of course, is that the reconstruction filter fills in the time between the sampling instants.

Don't get me started on DAWs that draw waveforms as connect-the-dots ...

-a
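A tiny sketch of that reconstruction (Whittaker-Shannon sinc interpolation, truncated; numpy assumed): evaluating midway between two sampling instants recovers the band-limited original, with no stair-step in sight.

import numpy as np

fs, f = 100, 13.0               # 13 Hz tone, Nyquist is 50 Hz
n = np.arange(200)
samples = np.sin(2 * np.pi * f * n / fs)   # points on the "lollipop" plot

t = 100.5 / fs                  # halfway BETWEEN two sample instants
recon = np.sum(samples * np.sinc(fs * t - n))  # x(t) = sum x[n] sinc(fs*t - n)
print("reconstructed:", recon)
print("true value   :", np.sin(2 * np.pi * f * t))
# The two agree closely; the tiny residue exists only because the
# ideally infinite sinc sum is truncated to 200 terms here.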
 
Semi off-topic: you guys would make a dub in reverse to correct the phase shift? That's interesting. I will have to do some testing...
 
Andy Peters said:
You do realize that a full-scale sine wave (for example) will have parts of the signal that are very near 0V? Are you saying, then, that that part of the signal isn't coded reasonably?
Yes, of course there are parts close to 0V; they are coded equally, because the steps are fixed/linear.

If your word length is 16 bits, it takes 16 bits to code the sample, regardless of amplitude.
Correct, the word length always stays 16 bits, but the number of bits used to describe the wave-pattern depends on the amplitude of the signal.

Consider the converter reference voltage to be the same as an analog power-supply rail. Both set the maximum amplitude. If you have a 5V rail, and a 100 mV signal, then by your logic, you're "throwing away" something if the signal isn't swinging to the rail.
Yep, and here is where analog and digital differ in behaviour.
Analog does not change the wave-pattern when the amplitude is lowered (only the S/N gets worse).
In an A/D converter, a lower-amplitude wave-pattern is converted at lower resolution/accuracy.
With digital tracking I still advise recording as hot as possible, not for the best S/N, but for the best-resolution conversion.

The point, which I think you're missing, is that the quantization noise of a modern 24-bit converter is at or below the analog noise floor, so your concerns are probably misplaced.
-a
The point I am trying to make is not about the noise floor; it is about the relative accuracy of describing the wave-pattern, which decreases as the amplitude gets lower.
I agree that with modern 24-bit converters the effect is less dominant, but the generic problem is still there, just an order of magnitude smaller.

There have been various initiatives to improve A/D resolution for low-amplitude signals:
There have been logarithmic A/D converters in the past, which are relatively more accurate on low-amplitude signals than linear converters.
Gain-ranging is an alternative method, where converter operation depends on the input level.
The main problem is that you would need a new 'digital word' format, and it is very difficult to change an entire industry.
An interesting approach from Stagetec uses 4 A/D converters connected to 4 different gain stages: http://www.stagetec.com/web/en/audio-technology-products/standalone-converter/concept.html

Don't get me wrong, I am not advocating that we all go back to tape and Dolby units; they come with their own set of problems and limitations. Certainly 24-bit is a huge step forward, but don't focus only on S/N ratio and assume that dithering will take care of everything. Be aware of the limitations of the technology and use it wisely.

grT
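A sketch of the logarithmic-converter idea mentioned above, using the classic mu-law companding curve (mu = 255; a toy illustration, not any specific product): quantizing in the log domain keeps the relative error roughly constant as the level drops, whereas a linear quantizer's relative error grows.

import numpy as np

mu, bits = 255.0, 12            # same word length for both quantizers
step = 2.0 / 2**bits            # full scale is -1..+1

def linear_q(x):
    return np.round(x / step) * step

def mulaw_q(x):
    y = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)  # compress
    y = np.round(y / step) * step                             # quantize
    return np.sign(y) * ((1.0 + mu) ** np.abs(y) - 1.0) / mu  # expand

x0 = np.sin(np.linspace(0, 2 * np.pi, 4096, endpoint=False))
for level in (1.0, 0.1, 0.01):
    x = level * x0
    for name, q in (("linear", linear_q), ("mu-law", mulaw_q)):
        rel = np.sqrt(np.mean((q(x) - x) ** 2)) / level
        print(f"level {level:5.2f}  {name:6s} relative rms error {rel:.1e}")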
 
Balijon said:
Analog does not change the wave-pattern when the amplitude is lowered (only the S/N gets worse).
Not so.

Balijon, if you have a Sony PCM-F1, you can test (listen to) this for yourself. Just send a signal in and out of the F1. There is no need to record anything: on "source monitor", the signal goes straight through the A/D & D/A.

I challenge you to find how this in 16b mode is different from a perfect analogue chain of the same bandwidth and 93dB S/N.

You can even record music or sine waves at -113dBfs, i.e. -20dB below the noise, and you will still hear a clean, undistorted, though very noisy signal ... just like a perfect analogue (or digital) chain.

If you want to hear a bad digital chain, just switch it to 14b mode (on my early F1) and you will hear all the problems you describe.

But the F1 in 16b mode is difficult to fault via either digital or analogue standards.

That's not to say all digital recorders & chains are as good as the old F1 in 16b mode.  There are many "24b" processors and recorders with poor dithering and abysmal anti-aliasing filters.  Anti-aliasing filters in most of the expensive DAWs are laughable too.

The current trend of recording at zillions of kHz, for material that is meant for CD, means there are a lot of bad CDs out there.

For the theory behind all this, see the papers by Lipshitz & Vanderkooy in the JAES. But I urge you to listen for yourself, as you have the perfect demonstration of good & bad digital: a Sony PCM-F1.
 