Ignoring 0VU.... urgh...

But there is no 0VU in digital. When Pucho said 0VU he meant "old style averaging meter". The gain structure and dynamic range in digital depends on how the math is being done. I think he was pleading for headroom. I'd bet the slammed tracks have a healthy dose of old-fashioned cheap analog crunch.

Hopefully analog gear sounds the same regardless of level as long as you are between the noise floor and clipping. It often doesn't. I don't think it's designed in. The gear that sounds the same at all levels is a better design.
 
jtoole said:
Ok - I normally stay out of these 'religious' debates (and I've just been lurking in Studio A lately since most of my free time is going to recording instead of building gear), but I have to jump in because I want to understand these topics with clarity.  With all the knowledge on this forum we should be able to forge a consensus at least on the objective aspects.

Would someone be willing to expand on the notion that the input level DOESN'T matter in _fixed_ point digital systems? It would seem that in a 16-bit system (as an example for 'simplicity') each 16-bit sample is a signed integer between -32768 and 32767. If I input a waveform that is half the maximum level of the converter, the result will be a digital representation that ranges from -16384 to 16383, or the range of a 15-bit signed integer. i.e. for each -6dB below FS one loses a single bit of resolution. Correct me where/if I am wrong, please!  ;D  I'm not saying it's audibly significant in all contexts, but it seems unavoidable due to the math and signal representations. Certainly this effect becomes less relevant at 24-bit resolution, as you can go down to -48dBFS and still have the equivalent of a 16-bit resolution signal.
That is exactly the position I have defended in another forum, and I've been snickered at by the Founding Fathers. Although it is almost without consequence (within reason) for the resulting digital audio quality, it is nevertheless true.
Also true is the fact that the analog equipment may not be happy to operate at too high or too low levels.
I can imagine that a floating point system could be different because the magnitude of the represented signal is independent of the precision of the value. As a result, a signal at a fraction of full scale can still have the full resolution of a FS signal.
This is true and confirmed by practice. The same caveats as fixed-point apply, though.
Philosophically I am 147% behind the notion that your EARS and what you hear are the most important metric. On the flip-side, much of this debate is based on math and there should be objective answers to many of these questions.
One of the most frustrating aspects of this subject is that, at one moment or another, someone says: "I use my ears and I know what I hear." Most of the time he says: "I like it better when there's a lot of iron and tape involved and Fender spank, so this must be better than digital; the maths are wrong."
When in doubt in such cases, I bounce two versions, invert one and sum. 
The problem with a null test is that it tells you whether or not there's a difference, but not which one is objectively better. Frequency response and THD measurements may not be the ultimate metrics, but they give an objective quality evaluation.
If the bits are the same and you think it sounds different, the price you pay for something probably determines how much you like it.  :D
Also, don't forget to make facts fit with theory, not the contrary  ;D
To the original point about tracks recorded "too loud" for ITB mixing - the only way it seems this could matter is the sonic impact of digital attenuation (distinct from summing). Summing is addition and therefore causes ZERO degradation until clipping. Any time you sum in Pro Tools you have an AUX or MASTER track that represents the buss output - if it clips, you're clipping (bad) - if not, you have lost ZERO information. On the other hand, attenuation/gain is a multiplication, therefore the result of the operation exceeds the input resolution (hence the use of the internal 48-bit buss). Whether anyone can actually hear the degradation caused by this multiplication and eventual truncation is another story, however...  :-\
I have the frequent opportunity of evaluating program going through a variable number of attenuation/gain algorithms. Until now, not one listener has been capable of telling the difference between straight-through, 0dB gain, and gain followed by identical attenuation. Needless to say, measurements confirm this.
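For anyone who wants to try that null test at home, here is a minimal Python/NumPy sketch of straight-through versus gain-followed-by-identical-attenuation; the 1 kHz tone, the +6 dB figure, and the 64-bit float math are my assumptions for illustration, not a description of any particular DAW.

import numpy as np

fs = 48000
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 1000 * t)   # a -6 dBFS, 1 kHz test tone

gain = 10 ** (6 / 20)                    # +6 dB of gain...
y = (x * gain) / gain                    # ...followed by identical attenuation, in 64-bit floats

residual = y - x                         # invert one version and sum
peak = np.max(np.abs(residual))
print("-inf dBFS" if peak == 0 else f"null residual: {20 * np.log10(peak):.1f} dBFS")

Any residual is down at double-precision rounding error, hundreds of dB below any converter's noise floor, which is consistent with the listening and measurement results described above.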
 
benlindell said:
Isn't the most important part of recording around 0VU the fact that most analog gear is designed to be hanging out around there?

The classic nominal 0VU is probably an artifact of slow average responding meters, where audio paths were routinely designed with 20dB or so of headroom above 0VU. Setting a nominal level around -20 something dB digital FS is six of one, half dozen another compared to old analog paths.

Modern (digital) audio paths, with fast peak responding meters, allow the operator to more precisely prevent saturating the audio path. In the old days operators had to apply some windage to VU meters for the crest factor of different individual sources.

When meters are used to try to establish a rough mix balance, average response is better correlated to apparent loudness than peak responding meters.

JR
 
Gold said:
Hopefully analog gear sounds the same regardless of level as long as you are between the noise floor and clipping. It often doesn't. I don't think it's designed in. The gear that sounds the same at all levels is a better design.

Perhaps in a mastering situation.

In tracking situations many folks pick gear based on its particular coloration, especially when pushed hard or used in ways the designer never anticipated.

Mark
 
jtoole said:
Would someone be willing to expand on the notion that the input level DOESN'T matter in _fixed_ point digital systems? It would seem that in a 16-bit system (as an example for 'simplicity') each 16-bit sample is a signed integer between -32768 and 32767. If I input a waveform that is half the maximum level of the converter, the result will be a digital representation that ranges from -16384 to 16383, or the range of a 15-bit signed integer. i.e. for each -6dB below FS one loses a single bit of resolution. Correct me where/if I am wrong, please!

If you input a waveform at -6dBFS, yes, its numeric value ranges from -16384 to 16383. But you still need all sixteen bits to represent that value.

Each additional bit gives you 6.02 dB resolution at the low end.

-a
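A quick numeric check of the two points above (my sketch; the specific values are only illustrative):

import numpy as np

full_scale = 32767                       # largest positive 16-bit signed value
half_scale = np.int16(full_scale // 2)   # a peak at roughly -6 dBFS
print(half_scale, half_scale.dtype)      # 16383, still stored in a 16-bit word

print(round(20 * np.log10(2), 2))        # ~6.02 dB: the level change per bit

# quantization step relative to full scale for 15 vs. 16 bits
for bits in (15, 16):
    step = 1 / 2 ** (bits - 1)
    print(bits, "bits: step size =", f"{20 * np.log10(step):.1f} dB re FS")

The half-scale peak still occupies a full 16-bit word, and each extra bit lowers the step size by roughly 6 dB.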
 
From a data acquisition standpoint, the lower the signal's nominal level, the more uncertainty there is in the acquired data.
Remember we are talking in dB, which is log scaling, while the data are acquired linearly over the 2^16 code values.
If you plot this in Excel, you see that the uncertainty in the acquired data, relative to the signal, grows steadily as the signal level in dB decreases.

Btw, what is the input / output level from pro soundcards that corresponds to 0 dBFS? +4?
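Here is the same exercise in Python rather than Excel (my illustration, assuming a plain 16-bit quantizer over a +/-1.0 full-scale range): the worst-case quantization error is a fixed half LSB, so its size relative to the signal grows as the level drops.

import numpy as np

bits = 16
step = 2 / 2 ** bits                     # one LSB for a +/-1.0 full-scale range

for level_db in (0, -6, -12, -24, -48):
    amplitude = 10 ** (level_db / 20)
    rel_error_db = 20 * np.log10((step / 2) / amplitude)
    print(f"{level_db:>4} dBFS signal: worst-case error {rel_error_db:.1f} dB relative to the signal")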
 
Biasrocks said:
especially when pushed hard or used in ways the designer never anticipated.

Yes, that's my point. It's an unintended use. The designer most likely would have wanted it to sound the same at all levels. This is "good design". I always put peanut butter on XLR connections because it sounds great.
 
Gold said:
Yes, that's my point. It's an unintended use. The designer most likely would have wanted it to sound the same at all levels. This is "good design".

Not trying to sidetrack this thread, but...

There are plenty of examples of equipment being used in unintended ways in the recording studio. Breaking the rules sometimes results in great sounds; to me that's all that matters when making a record.

Of course mastering is a different discipline and requires a different set of tools.

I always put peanut butter on XLR connections because it sounds great.

Cool, I always put a pad on the output of my APIs because they sound great when pushed hard. :)

Mark
 
When designing studio gear you have to make a philosophical choice between designing a neutral path or a sound effect. I have designed studio delay line/flangers that were clearly an effect, and studio EQ and other types of processing where my goal was more neutral, letting the knob settings be the only thing that altered the sound.

Of course the equipment operator can always do whatever it takes to get a new, different sound. That, IMO, is his or her job, not mine, when developing a single link for a larger audio chain with defined tasks.

If every hardware designer makes his single link of the chain sound "special" when it shouldn't sound like anything at all, you risk ending up with way too much foo... from multiple added sonic artifacts or colorations in a complete chain.

JR

Note: I am no longer designing studio gear so perhaps the market voted against my philosophy.  :eek:  Take my comments with a grain or two of salt.
 
Andy Peters said:
If you input a waveform at -6dBFS, yes, its numeric value ranges from -16384 to 16383. But you still need all sixteen bits to represent that value.

Just to make sure I understand what you're getting at here:  Is the point about needing sixteen bits because you mean that since the signal is half FS, the additional bit that is "unused" is still required to distinguish it from a true full scale signal (i.e. for the positive going portion, the unused bits are always zero, but that is still relevant)? i.e. 16383 in a 16bit system is half FS, while 16383 in a hypothetical 15bit system is FS?

If so, I can see what you mean but the waveform is still encoded using a lower number of bits and therefore a lower resolution (what 'dmp' referred to as greater uncertainty?) of each individual sample along the waveform than would be possible if it was brought in at a higher level. 

Andy Peters said:
Each additional bit gives you 6.02 dB resolution at the low end.

When you say "low end" do you mean for lower level(amplitude) aspects of the signal?  If so, I've heard this thrown about before but it seems like a misleading concept.  An additional bit of resolution increases the possible 'steps' between any two sample amplitudes at every possible signal level, low or high.  If you just mean at the "low end" of the number, meaning the less 'significant' portion of the value, than I totally agree - each bit added is representing a smaller absolute step in the samples value.

I will reiterate that I am not claiming that people need to shoot for 0dBFS in a 24bit recording - the loss of resolution going in at -6dBFS or -12dBFS is far outweighed by the potential advantages (in my mind one Over in a great take is much worse than losing a bit or two of resolution).  On the other hand, armed with knowledge of digital signal resolution I would say that all other things being equal I'd prefer to capture a waveform at the highest possible SAFE level (within any limitations or aesthetic choices driven by the analog chains output).  I think this becomes particularly true if the overall process may include multiple round-trip conversions.  Likewise for round-trip conversions avoiding overly low DA output levels and low AD input levels seems advantageous when there is a desire to maintain the original waveform resolution.  Obviously once resolution is lost it can never be regained.

Of course it very much depends on the task (track, mix, master) and project (classical, bluegrass, rock) to determine how much alteration of the captured waveform is desired.  Many genres even rely on abusing the 'rules' to a signature sound.  That's the art of it all, which can live quite comfortably along side the science and math too.

dmp said:
Btw, what is the input / output level from pro soundcards that corresponds to 0 dBFS? +4?

Pro Tools HD:
Input Sensitivity: +18dBu (+4dBu nominal) = 0dBFS
Output Level: 0dBFS = +18dBu (+4dBu nominal)

Cheers all!
"Where's my beer mug smilie!"
 
jtoole said:
Andy Peters said:
If you input a waveform at -6dBFS, yes, its numeric value ranges from -16384 to 16383. But you still need all sixteen bits to represent that value.

Just to make sure I understand what you're getting at here:  Is the point about needing sixteen bits because you mean that since the signal is half FS, the additional bit that is "unused" is still required to distinguish it from a true full scale signal (i.e. for the positive going portion, the unused bits are always zero, but that is still relevant)? i.e. 16383 in a 16bit system is half FS, while 16383 in a hypothetical 15bit system is FS?
He means exactly what he said... it takes all 16 bits to define that signal. The only way you could describe it with 15 bits is if FS was reduced to half the voltage swing.
If so, I can see what you mean but the waveform is still encoded using a lower number of bits and therefore a lower resolution (what 'dmp' referred to as greater uncertainty?) of each individual sample along the waveform than would be possible if it was brought in at a higher level. 
You are overthinking this...
Andy Peters said:
Each additional bit gives you 6.02 dB resolution at the low end.

When you say "low end" do you mean for lower level(amplitude) aspects of the signal?  If so, I've heard this thrown about before but it seems like a misleading concept.  An additional bit of resolution increases the possible 'steps' between any two sample amplitudes at every possible signal level, low or high.  If you just mean at the "low end" of the number, meaning the less 'significant' portion of the value, than I totally agree - each bit added is representing a smaller absolute step in the samples value.
No matter how we try to think about this. More bits always translates into more steps. It is perfectly reasonable to ASSume that FS is typically scaled to nominal +20 something dBu, limited by power supply voltages, so every additional bit adds 6dB of resolution that shows up in smaller step sizes. 

Indeed these smaller/more step intervals are everywhere, but to think of them in dB terms it is most meaningful presented as X dB below FS or some other nominal 0VU.
I will reiterate that I am not claiming that people need to shoot for 0dBFS in a 24bit recording - the loss of resolution going in at -6dBFS or -12dBFS is far outweighed by the potential advantages (in my mind one Over in a great take is much worse than losing a bit or two of resolution).  On the other hand, armed with knowledge of digital signal resolution I would say that all other things being equal I'd prefer to capture a waveform at the highest possible SAFE level (within any limitations or aesthetic choices driven by the analog chains output).  I think this becomes particularly true if the overall process may include multiple round-trip conversions.  Likewise for round-trip conversions avoiding overly low DA output levels and low AD input levels seems advantageous when there is a desire to maintain the original waveform resolution.  Obviously once resolution is lost it can never be regained.

Of course it very much depends on the task (track, mix, master) and project (classical, bluegrass, rock) to determine how much alteration of the captured waveform is desired.  Many genres even rely on abusing the 'rules' to a signature sound.  That's the art of it all, which can live quite comfortably along side the science and math too.

dmp said:
Btw, what is the input / output level from pro soundcards that corresponds to 0 dBFS? +4?

Pro Tools HD:
Input Sensitivity: +18dBu (+4dBu nominal) = 0dBFS
Output Level: 0dBFS = +18dBu (+4dBu nominal)

Cheers all!
"Where's my beer mug smilie!"

Modern converters have step-size resolution smaller than their actual "analog" noise floor. Optimal gain structure is set exactly the same way as for analog paths: avoid clipping from being too loud, and avoid hiss from being too close to the noise floor... 

Quantization noise/distortion is mainly a bad memory for those of us old enough to recall the early low-bit digital systems, and a theoretical boogeyman for all the rest. Because the bit resolution is smaller than the analog noise floor, low-level signals will be dithered by that noise floor and not get quantized into nasty-sounding square waves... 

Relax, don't overthink this...

JR
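A minimal sketch of the dither mechanism JR describes (my illustration; the roughly one-LSB tone and the TPDF dither amount are my choices, not his design):

import numpy as np

fs = 48000
t = np.arange(fs) / fs
bits = 16
step = 2 / 2 ** bits                                  # one LSB for a +/-1.0 range
x = 1.2 * step * np.sin(2 * np.pi * 1000 * t)         # a tone only ~1 LSB tall

def quantize(sig):
    return np.round(sig / step) * step

rng = np.random.default_rng(0)
tpdf = rng.uniform(-step / 2, step / 2, len(x)) + rng.uniform(-step / 2, step / 2, len(x))

plain = quantize(x)                                   # undithered: collapses to a few hard levels
dithered = quantize(x + tpdf)                         # dithered: tone rides on a benign noise floor

print("distinct output values, undithered:", len(np.unique(plain)))
print("distinct output values, dithered:  ", len(np.unique(dithered)))

The undithered version collapses to a handful of hard levels (the "nasty-sounding square waves"), while the dithered version keeps the tone linearly encoded at the cost of a small noise floor, which is what a converter's own analog noise does for free.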
 
JohnRoberts said:
Relax, don't overthink this...

I think I'm actually quite laid back ;D Not trying to overthink it, but I'll take that as the "dead horse" note. I was just trying to clarify my understanding of digital signals, conversion, and resolution. As someone who has done a lot of self-directed learning, it can be helpful to test your understanding against others. 

I appreciate the clarifications, and I'll take your main point to be that in 24-bit recording the limitations of the analog noise floor trump any concerns over resolution.

jt
 
Some interesting reading:

http://www.tcelectronic.com/media/nielsen_lund_2003_overload.pdf

http://akmedia.digidesign.com/support/docs/48_Bit_Mixer_26688.pdf

 
benlindell said:
Isn't the most important part of recording around 0VU the fact that most analog gear is designed to be hanging out around there?

Analog and digital gear are both designed to accommodate the entire audio signal without too much added noise at the bottom or distortion at the top. The desire to operate around 0VU (using average responding meters) to keep out of the hissy noise floor was stronger with early gear than with later, modern implementations. Early digital systems were not only noisy but very intolerant of clipping, so the meters were peak responding and referenced to full scale (clipping), not to some nominal zero level "approximately" 20 dB below clipping.

In the digital domain, all the different boxes will track wrt the FS metering (bits are bits). In the analog world all the media and gear tried to line up wrt 0VU, so headroom from box to box could vary.

With modern gear, analog or digital, it's six of one half dozen another... I am a little uncomfortable that there isn't a strict 0VU line on digital meters, so make your own...



JR


 
Manufacturers like to have "standards"; unfortunately they each want a different one!

There are differences in the digital level readouts between P-Tools / Logic / Nuendo etc.
Nothing much you can do; just calibrate for it, record well, and don't push the level too high.
All good :)
I was never a huge fan of the limitations of tape anyway .... been there, done that ....

MM.
 
JohnRoberts said:
benlindell said:
Isn't the most important part of recording around 0VU the fact that most analog gear is designed to be hanging out around there?

The classic nominal 0VU is probably an artifact of slow average responding meters, where audio paths were routinely designed with 20dB or so of headroom above 0VU. Setting a nominal level around -20 something dB digital FS is six of one, half dozen another compared to old analog paths.

Modern (digital) audio paths, with fast peak responding meters, allow the operator to more precisely prevent saturating the audio path. In the old days operators had to apply some windage to VU meters for the crest factor of different individual sources.

When meters are used to try to establish a rough mix balance, average response is better correlated to apparent loudness than peak responding meters.

And that last sentence is why VU meters were developed, and why they still are useful in audio.

Peace,
Paul
 
pstamler said:
JohnRoberts said:
benlindell said:
Isn't the most important part of recording around 0VU the fact that most analog gear is designed to be hanging out around there?

The classic nominal 0VU is probably an artifact of slow average responding meters, where audio paths were routinely designed with 20dB or so of headroom above 0VU. Setting a nominal level around -20 something dB digital FS is six of one, half dozen another compared to old analog paths.

Modern (digital) audio paths, with fast peak responding meters, allow the operator to more precisely prevent saturating the audio path. In the old days operators had to apply some windage to VU meters for the crest factor of different individual sources.

When meters are used to try to establish a rough mix balance, average response is better correlated to apparent loudness than peak responding meters.

And that last sentence is why VU meters were developed, and why they still are useful in audio.

Peace,
Paul

... and that's why I prefer to display both in my simultaneous Peak/VU meters. No more guesswork about headroom, and you get the extra benefit of being able to read crest factor directly from the gap between Peak and VU when you use evenly spaced (in dB) LEDs.

I have built consoles with some 25+ real VU meters in the meter bridge, but for the life of me I don't see how a mix engineer can effectively scan across 24 tracks at the same moment and get a good sense of the relative mix. How many VU meters can you read at the same time?

Even for VU only, side-by-side vertical LED bar graphs are much easier to scan across as a group to make relative level comparisons. The outliers will jump out visually - not that you should mix by meters anyhow. Meters are good to check your ears and keep it between the ditches. While I don't want to get into the philosophy of mixing... I'm a circuits nerd.

VU meters may have been the best tool for the job a few decades ago, but I stand by my current thinking that they are now an archaic fashion statement, admired more for looks than utility.

Of course opinions vary (and the customer is always right), but I am more than a casual student of audio metering.

JR
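For the curious, a rough sketch of reading crest factor from the gap between a peak and an average meter; the ballistics are ignored (plain RMS stands in for the VU average) and the test signal is invented.

import numpy as np

fs = 48000
t = np.arange(fs) / fs
x = 0.3 * np.sin(2 * np.pi * 440 * t)            # steady tone...
x[::4800] += 0.6                                 # ...plus short transient spikes

peak_db = 20 * np.log10(np.max(np.abs(x)))
avg_db = 20 * np.log10(np.sqrt(np.mean(x ** 2))) # RMS as a stand-in for the VU reading

print(f"peak: {peak_db:6.1f} dBFS")
print(f"avg:  {avg_db:6.1f} dBFS")
print(f"crest factor (the peak/VU gap): {peak_db - avg_db:.1f} dB")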

 
I think that the correlation between dBFS and the VU scale is still hugely important. Take a TV production truck, for example: digital console -> maybe an old analog transmission truck. Better know where you stand! On a particular truck I work on I need to hang around -20dBFS to be at 0 VU. Do those levels "look" remotely close to nominal on the digital console? No!

So my point was that it is valid to allow a user-assignable "nominal" mark on the dBFS scale for their requirements.
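That user-assignable nominal mark is trivial to express; a sketch, with the -20 dBFS reference borrowed from the truck example above and the function name mine:

def dbfs_to_vu(level_dbfs: float, nominal_dbfs: float = -20.0) -> float:
    """Map a dBFS reading onto a VU-style scale where nominal_dbfs reads as 0 VU."""
    return level_dbfs - nominal_dbfs

print(dbfs_to_vu(-20.0))   # 0.0  -> sits right on the user's 0 VU mark
print(dbfs_to_vu(-14.0))   # +6.0 -> 6 dB hot relative to the chosen nominal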
 
