dBu to 0 dBFS

I don't agree with some of the assertions in this thread. The number of bits in a digital system directly influences the resolution of the captured signal. It's not just a matter of exceeding the noise floor, after which everything becomes exact.

For example, at -5.12564... dBFS, at a certain point you can no longer resolve the decimal places. The resolution or precision depends on the bits. Now, you can model this as the actual amplitude plus some random error/noise signal, which I think is what people are referring to. But you can't just throw away the error portion when discussing the captured signal.
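To make that concrete, here's a minimal Python sketch (assuming NumPy is available) that stores the same -5.12564 dBFS amplitude at 16 and 24 bits and prints the round-off error each word length leaves behind; the values are illustrative, not from any particular converter:

```python
import numpy as np

# A sample value that needs more precision than either format offers.
x = 10 ** (-5.12564 / 20)        # linear amplitude of a -5.12564 dBFS peak

for bits in (16, 24):
    q = 2.0 ** (1 - bits)        # one LSB for a +/-1.0 full-scale range
    xq = np.round(x / q) * q     # nearest value the format can represent
    err = abs(xq - x)            # the part that becomes error/noise
    print(f"{bits}-bit: stored {xq:.9f}, error at {20 * np.log10(err):.1f} dBFS")
```

Both formats store *something* at every level; they differ in how large the leftover error is.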
 
I don't know if you watched him, but this dude explains it pretty well:

https://www.youtube.com/watch?v=cIQ9IXSUzuM

 
Actually, this older video is even better... but you need to watch the whole thing!

https://www.youtube.com/watch?v=nLEhfieoMq8
 
kambo said:
As for "better", I am sure you don't use "superior" for something a bit better very often... unless it's woooow :)
Anyway,
You should read HiFi magazines; you would see that the use of superlatives such as paramount, superincumbent or predominant is second nature to them.  ;D
Seriously, "superior" can be used in daily conversation as "better" or "best", IMO, though English is not my native language either.
 
mattiasNYC said:
Ok, so my point was really this:
the actual recorded signal above -86 dBFS is exactly the same in 16-bit and 24-bit.
To which you replied: Indeed.
And I probably shouldn't have; I should have insisted on the fact that you cannot separate the signal from its unwanted components. Why is it acceptable to ignore anything below -86 dBFS? Probably because we can't hear it. Imagine someone/something with the capability to hear below -120 dBFS; he/it would have a much different view and would be right in saying the signal is not well described.


Yet when I tried to make the same point, the answer was just "no".
I should have said, as I wrote later: "Again, we see the importance of terminology. The signal is the whole signal, including noise and distortion." It's only the fundamental that's identical in the different versions; that doesn't allow us to say the signal is equally preserved.


You wrote: "The sine wave itself is what we care about, and it is defined well. Everything else, stuff that we care about only because it's stuff we don't want, is distortion residual."
You could say that of any distorted signal. Suppose the result of the conversion of your sine wave was a perfect square wave of amplitude π/4. Its Fourier decomposition would show a fundamental of amplitude 1. Would you conclude that the signal has been perfectly converted?
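For reference, the textbook Fourier series of a square wave of peak amplitude $A$ makes the arithmetic explicit:

$$\mathrm{sq}(t) = \frac{4A}{\pi}\left(\sin\omega t + \frac{\sin 3\omega t}{3} + \frac{\sin 5\omega t}{5} + \cdots\right)$$

With $A = \pi/4$ the fundamental term is exactly $\sin\omega t$, amplitude 1, even though the waveform itself is grossly distorted.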
 
kambo said:
I don't know if you watched him, but this dude explains it pretty well:

https://www.youtube.com/watch?v=cIQ9IXSUzuM

Look at the source of both videos. Xiph.org has a vested interest in audio codecs and in using as few bits as possible. They are OK as an intro to the topic, but neither is really qualified or understands what goes on in real-world systems.
 
kambo said:
Actually, this older video is even better... but you need to watch the whole thing!

https://www.youtube.com/watch?v=nLEhfieoMq8
Well, what he says is true; however, he didn't mention how dither replaces distortion with noise. I heard non-dithered recordings back when Bob Adams introduced the first Sigma-Delta converters; I can assure anybody that 8-bit non-dithered is a sonic aggression (it was standard in samplers, though).
He just mentioned in passing, at the end of the video, that there are excellent reasons for using 24-bit resolution in the process of creating audio.
I agree that for most domestic reproduction, 16-bit is perfectly adequate, but this is in part due to the fact that the maximum level is close to 0 dBFS, making good use of all the bits.
For a number of years I produced music using a 16-bit recorder; I had to be much more cautious with levels and knew I couldn't process the signals as much as I do now at 32-bit.
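A quick Python sketch of that "distortion into noise" effect, assuming NumPy; it quantizes a -40 dBFS sine to 8 bits with and without TPDF dither and summarizes the error spectrum. The parameters are arbitrary, chosen only to make the contrast obvious:

```python
import numpy as np

fs, n = 48000, 65536
t = np.arange(n) / fs
x = 0.01 * np.sin(2 * np.pi * 1000.5 * t)            # a -40 dBFS sine

q = 2.0 ** (1 - 8)                                   # 8-bit LSB, +/-1.0 full scale
tpdf = (np.random.rand(n) - np.random.rand(n)) * q   # +/-1 LSB triangular dither

for name, d in [("undithered", np.zeros(n)), ("TPDF dithered", tpdf)]:
    y = np.round((x + d) / q) * q                    # quantize to 8 bits
    spec = np.abs(np.fft.rfft((y - x) * np.hanning(n)))
    spec_db = 20 * np.log10(spec + 1e-12)
    # Undithered, the error spectrum is spiky (harmonics of the input);
    # dithered, the spikes melt into a flat, slightly higher noise floor.
    print(f"{name}: peak error bin {spec_db.max():.1f} dB, "
          f"median {np.median(spec_db):.1f} dB")
```

The undithered error shows a large peak-to-median spread (discrete distortion products); the dithered error is nearly flat.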
 
Perhaps a book like this would explain it in depth ;)
$40:
https://www.amazon.com/Digital-Audio-Explained-Engineer/dp/141960001X


Hey Abbey: I am sure you were using 2" 24-track machines back in the day, and you may still be using one.
In your experience, did you need to be as careful as when working with 16-bit recorders, or was it much easier to work with?
 
abbey road d enfer said:
For a number of years I produced music using a 16-bit recorder; I had to be much more cautious with levels and knew I couldn't process the signals as much as I do now at 32-bit.

And that is the crux of the biscuit. If we capture at 24 bits, our quantization noise floor is in all likelihood below the analog noise floor, which means that we can be conservative with input levels.  There is no need to run input levels at the top, risking clipping.

We do use "all of the bits" when converting at -18 dBFS, and we use them all when converting at -30 dBFS and -3 dBFS, too.  The difference between 16-bit conversion and 24-bit conversion is that the noise floor of the former is higher; our canonical 1 kHz sine wave remains the same.
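The arithmetic behind that, as a tiny Python sketch using the textbook 6.02N + 1.76 dB figure for an ideal dithered quantizer (real converters fall a few dB short of these numbers):

```python
# Ideal quantizer SNR for a full-scale sine: roughly 6.02*N + 1.76 dB.
for bits in (16, 24):
    noise_floor = -(6.02 * bits + 1.76)   # dBFS, relative to full scale
    for peak in (-3, -18, -30):
        print(f"{bits}-bit, peaks at {peak:>3} dBFS: "
              f"{peak - noise_floor:.0f} dB above the quantization noise")
```

Even peaking at -30 dBFS, a 16-bit capture sits about 68 dB above its quantization noise; at 24 bits it's about 116 dB. The sine itself is the same in both.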
 
abbey road d enfer said:
I should have said, as I wrote later: "Again, we see the importance of terminology. The signal is the whole signal, including noise and distortion." It's only the fundamental that's identical in the different versions; that doesn't allow us to say the signal is equally preserved.

Yes, it is indeed an issue of terminology, and you'll have to accept that you're not entirely correct here if that's your argument. Let me just quote the very first sentence on Wikipedia's entry for "Digital Recording":

"In digital recording, audio signals picked up by a microphone or other transducer or video signals picked up by a camera or similar device are converted into a stream of discrete numbers, representing the changes over time in air pressure for audio, and chroma and luminance values for video, then recorded to a storage device."

So if we really want to nit-pick this terminology the word "signal" is used twice, to refer to two different things:

1. The signal which is the analog input.
2. The "signal", which is really just a set of numbers representing #1 after conversion to digital.

Therefore, it's absolutely correct to point out that we're not describing or representing #1 any better at -10 dBFS with 24 bits compared to 16 bits, whereas you could make the argument that for #2 we are. But discussions on this topic typically have to do with recording, and where people get confused is in thinking that the signal we care about, the input signal, is somehow of higher "fidelity" when described using 24 bits rather than 16 bits, when really the practical issue is again the noise/distortion at the very bottom of it all.

Furthermore, a lot of people would argue that #2 is actually an incorrect usage of the word "signal": all those numbers aren't a signal at all, and the signal is just what we get after we've reconstructed it from those numbers. So there's no real "noise" or anything in the signal in its digital form, even if that's what we end up with once converted back to analog. The numbers are simply "data" which represent a signal. Just as there is criticism of what supposedly happens between the samples (the "stair steps" etc., which don't actually exist), there is criticism of calling our data set a "signal", again, because it isn't one; it's just a representation of one.

Of course, I find that to be a bit nit-picky again and fortunately such criticism is mild and infrequent, and I think it's sort of beside the point, but it's worth pointing that out if we are going to nit-pick what I was saying.

-----------

So anyway, I absolutely stand by what I hope I said earlier: by recording at 24 bits rather than 16 bits we push noise/distortion further down and can be more comfortable when recording, not having to worry about sitting close to the noise floor in order to leave headroom against clipping, and by recording a lower-level signal we're not describing our input signal with any less detail.
 
Andy Peters said:
We do use "all of the bits" when converting at -18 dBFS, and we use them all when converting at -30 dBFS and -3 dBFS, too.  The difference between 16-bit conversion and 24-bit conversion is that the noise floor of the former is higher; our canonical 1 kHz sine wave remains the same.

And I think the above is a good example of where other people get it wrong: they end up believing that more bits used means more fidelity or definition of the input signal we wish to capture.
 
What is fidelity? Most interpret it as accuracy: the less true it is to the original, the less fidelity there is. I would include noise in this, but can we at least agree that more distortion means less fidelity?

One misconception is that dither removes all the distortion. It can make things better at the expense of a higher noise floor, but even within that noise there are still some signal-dependent distortion products. The only way to truly eliminate them is to go to subtractive dither techniques, which is not the norm for audio.
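For the curious, a minimal Python sketch of subtractive dither, assuming NumPy and that we control both sides of the quantizer (which is exactly why it isn't practical for ordinary audio distribution): the same dither sequence is added before quantization and subtracted after, leaving an error that is uniform noise, independent of the signal.

```python
import numpy as np

rng = np.random.default_rng(0)
q = 2.0 ** -15                            # one 16-bit LSB, +/-1.0 full scale
x = 0.3 * np.sin(2 * np.pi * 1000 * np.arange(4096) / 48000)

d = (rng.random(x.size) - 0.5) * q        # 1 LSB rectangular dither, known to both ends

# Add d before quantizing, subtract the identical d afterwards.
y = np.round((x + d) / q) * q - d

err = y - x                               # uniform white noise, independent of x
print(f"error RMS {err.std():.2e} vs. ideal q/sqrt(12) = {q / np.sqrt(12):.2e}")
```

With ordinary (non-subtractive) dither the receiver never sees d, so this trick isn't available to a CD player.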

So then more bits equals more fidelity.  I don't see how you can argue otherwise,  unless you are using a nonstandard definition of fidelity.

Also, stair steps have caused lots of confusion and can be misinterpreted, but they do actually exist in certain real-life systems.
 
john12ax7 said:
My HiFi runs at 1 7/8 ips  😁

I actually have a reel-to-reel in one of my boxes, with a ton of reels from the '70s, full of great music...
I should dig it out... good call  :)
The Allman Brothers' drummer is gone too, BTW  :(
 
I just now got back to this; someone said they couldn't get the link I posted (now I see it's ti.com.cn, not ti.com). Someone else apparently uploaded it, but here's the "real" USA link, which should be blocked by fewer sites:
http://www.ti.com/lit/an/snoa232/snoa232.pdf

The point is that a "standard" (old-fashioned, as in R-2R and successive approximation) A/D conversion causes distortion (unwanted signal correlated with the original signal), but if a certain amount of white (or filtered) noise is added to the signal, then the only thing (other than the original signal) in the output is noise (which is uncorrelated with the original signal).  See figures 11 through 14 on page 5 of the app note.

Thus with proper dither, the only difference between A/D converters of different bit depths is the level of noise.

And yes, modern sigma-delta converters do this dithering (and noise shaping) thing inherently as part of the conversion, but too many people think of them as just like the old fashioned ones in that app note.
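Here's one way to see the "correlated with the original signal" point in a few lines of Python (assuming NumPy): a sine smaller than one LSB simply vanishes in an undithered quantizer, so the error *is* the signal, while with dither the error becomes signal-independent noise.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 32768
q = 2.0 ** -15                             # one 16-bit LSB
ref = np.sin(2 * np.pi * np.arange(n) / 64)
x = 0.4 * q * ref                          # a sine well below one LSB

def quantize(sig):
    return np.round(sig / q) * q

d = (rng.random(n) - rng.random(n)) * q    # +/-1 LSB TPDF dither

for name, err in [("undithered", quantize(x) - x),
                  ("TPDF dithered", quantize(x + d) - x)]:
    # Undithered, the output is all zeros, so err = -x: correlation -1.
    # Dithered, the sine survives in the output and err is just noise.
    print(f"{name}: error-to-signal correlation "
          f"{np.corrcoef(ref, err)[0, 1]:+.3f}")
```

This is the same behaviour the app note's figures 11 through 14 illustrate: dither trades correlated distortion for uncorrelated noise.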
 
kambo said:
I thought HiFi audio equipment operated at 16-bit, 44.1 kHz!
CDs do, but SACD (1-bit, 2.8 MHz) and DVD-A (multi-format, up to 24-bit/192 kHz) offer higher resolution; OTOH, music servers use MP3 in addition to 16/44.1.
What is HiFi? For many people that's the name they give to their sound box.
 
mattiasNYC said:
"In digital recording, audio signals picked up by a microphone or other transducer or video signals picked up by a camera or similar device are converted into a stream of discrete numbers, representing the changes over time in air pressure for audio, and chroma and luminance values for video, then recorded to a storage device."

So if we really want to nit-pick this terminology the word "signal" is used twice, to refer to two different things:

1. The signal which is the analog input.
2. The "signal", which is really just a set of numbers representing #1 after conversion to digital.
There is no contradiction here: one is the "audio signals picked up by a microphone or other transducer", the other is a digital signal. The same word can represent different things as long as something complements its meaning.

Therefore, it's absolutely correct to point out that we're not describing or representing #1 any better at -10 dBFS with 24 bits compared to 16 bits, whereas you could make the argument that for #2 we are. But discussions on this topic typically have to do with recording, and where people get confused is in thinking that the signal we care about, the input signal, is somehow of higher "fidelity" when described using 24 bits rather than 16 bits, when really the practical issue is again the noise/distortion at the very bottom of it all.
I won't continue in this direction, because we have different views; the fact that a dithered signal is less audibly offensive does not make it more "accurate" than an undithered one. A test instrument, which does not have the limitations of human audition, knows that dithering has morphed distortion products into noise.
Fidelity is a human-centric concept; in fact it's all about how much inaccuracy goes unnoticed. Bats could say that cymbals reproduced with a 20 kHz bandwidth sound awfully wrong, because they actually produce significant energy up to 50 kHz.

Furthermore, a lot of people would argue that #2 is actually an incorrect usage of the word "signal": all those numbers aren't a signal at all, and the signal is just what we get after we've reconstructed it from those numbers. So there's no real "noise" or anything in the signal in its digital form, even if that's what we end up with once converted back to analog. The numbers are simply "data" which represent a signal. Just as there is criticism of what supposedly happens between the samples (the "stair steps" etc., which don't actually exist), there is criticism of calling our data set a "signal", again, because it isn't one; it's just a representation of one.
From the Webster dictionary:
Signal
a :  an object used to transmit or convey information beyond the range of human voice
b :  the sound or image conveyed in telegraphy, telephony, radio, radar, or television
c :  a detectable physical quantity or impulse (as a voltage, current, or magnetic field strength) by which messages or information can be transmitted
A digital signal fits the first definition pretty well, IMO.


Of course, I find that to be a bit nit-picky again and fortunately such criticism is mild and infrequent, and I think it's sort of beside the point, but it's worth pointing that out if we are going to nit-pick what I was saying.
You call that nit-picking; I refer to it as scientific rigour.


So anyway, I absolutely stand by what I hope I said earlier: by recording at 24 bits rather than 16 bits we push noise/distortion further down and can be more comfortable when recording, not having to worry about sitting close to the noise floor in order to leave headroom against clipping, and by recording a lower-level signal we're not describing our input signal with any less detail.
Allow me to remind you that the issue was about resolution, which is a purely mathematical notion, not tied to a physical implementation.
 
abbey road d enfer said:
There is no contradiction here: one is the "audio signals picked up by a microphone or other transducer", the other is a digital signal. The same word can represent different things as long as something complements its meaning.

I never said there was a contradiction, I said there was a difference. You just acknowledged that.

abbey road d enfer said:
I won't continue in this direction, because we have different views; the fact that a dithered signal is less audibly offensive does not make it more "accurate" than an undithered one. A test instrument, which does not have the limitations of human audition, knows that dithering has morphed distortion products into noise.

I didn't say that the inaudibility of the byproducts of conversion = more accuracy.

The problem is that you're disregarding the difference I mentioned above, so of course you won't continue in that "direction". But it is nevertheless the correct way of looking at it and it is the way many people who work with converter design and digital audio indeed think of this. There's the input signal and how well that is represented, and then there's the "other stuff" that is created through the process we're discussing. The input signal can be correctly represented as we've all agreed upon, and the noise can be pushed way down. However, you appear to want to have your cake and eat it too in the sense that you'll willingly acknowledge that the (input) signal can be correctly described but that the (converted) signal is not the same.

Either you make a distinction of the word "signal" every time you use it, or you put it in a context in which it makes sense and allow for a distinction that way instead. Or, a third option would be to just not agree that there's a difference between an input signal and a converted signal (and this is all using the term "signal" the way you want to when referring to data rather than, you know, an actual signal).

abbey road d enfer said:
From the Webster dictionary:
Signal
a :  an object used to transmit or convey information beyond the range of human voice
b :  the sound or image conveyed in telegraphy, telephony, radio, radar, or television
c :  a detectable physical quantity or impulse (as a voltage, current, or magnetic field strength) by which messages or information can be transmitted
A digital signal fits the first definition pretty well, IMO.

I don't see how a string of numbers constitutes "an object". Since you claim none of this is meaningless nit-picking, I'll just add the definition of "object", also from Webster's:

   

Definition of object

1a :  something material that may be perceived by the senses <I see an object in the distance.>
b :  something that when viewed stirs a particular emotion (as pity) <Look on the tragic loading of this bed … the object poisons sight; let it be hid. — William Shakespeare>

So there you go: a bunch of numbers in my head, or on a piece of paper, or in a file on a hard drive, do not constitute an object, and since a signal is an object it stands to reason that those numbers aren't a signal. Once sent out through the D/A a signal has been reconstructed, however; so they are a representation of a signal, but not a signal per se.
 
mattiasNYC said:
I don't see how a string of numbers constitutes "an object". Since you claim none of this is meaningless nit-picking, I'll just add the definition of "object", also from Webster's:

So there you go: a bunch of numbers in my head, or on a piece of paper, or in a file on a hard drive, do not constitute an object, and since a signal is an object it stands to reason that those numbers aren't a signal. Once sent out through the D/A a signal has been reconstructed, however; so they are a representation of a signal, but not a signal per se.
Well, it seems you're ready to pursue this conversation until you have the final word; it's endless, so I'll stop here, with a last comment:
The complete definition of object, according to the Webster, is:
1a :  something material that may be perceived by the senses <I see an object in the distance.>
b :  something that when viewed stirs a particular emotion (as pity) <Look on the tragic loading of this bed … the object poisons sight; let it be hid. — William Shakespeare>

2a :  something mental or physical toward which thought, feeling, or action is directed <an object for study> <the object of my affection> <delicately carved art object>
b :  something physical that is perceived by an individual and becomes an agent for psychological identification <The mother is the primary object of the child.>

3a :  the goal or end of an effort or activity :  purpose, objective <Their object is to investigate the matter thoroughly.> <The object of the game is to score the most points.>
b :  a cause for attention or concern <Money is no object.>

4 :  a thing that forms an element of or constitutes the subject matter of an investigation or science <objects of study>

5a :  a noun or noun equivalent (as a pronoun, gerund, or clause) denoting the goal or result of the action of a verb (as ball in I hit the ball)
b :  a noun or noun equivalent in a prepositional phrase (as table in on the table)

6a :  a data structure in object-oriented programming that can contain functions as well as data, variables, and other data structures
b :  a discrete entity (as a window or icon) in computer graphics that can be manipulated independently of other such entities


You have selected the one definition that suits your claim, but anyone can see that definitions 4 and 6a are relevant to the case in discussion.
I believe we have disturbed too many electrons for this fruitless discussion.
 
