mattiasNYC said:
I think it might be worth mentioning that the practical implication of increasing bit depth would actually be analogous to the ceiling remaining where it is but lowering the floor. So the distance between floor and ceiling increases. In digital audio the result is increased dynamic range (i.e. from full signal down to the noise floor).
I understand the point you're making, but I've never liked the common audio analogy of "lowering the noise floor" with more bits. Even though the end result can be better SNR, that isn't inherently the case just because we have more bits.
In a 1-bit ADC, we can only convert our analog signal to 2 possibilities ("shelves"/steps/etc):
1 == signal_on_full_blast
0 == signal_off
In a 2-bit ADC, we have 4 possibilities:
11 == signal_on_full_blast
10 == signal_loud
01 == signal_soft
00 == signal_off
In a 3-bit ADC, we have 8 possibilities:
111 == signal_on_full_blast
110 == signal_very_loud
101 == signal_loud
100 == signal_medium
011 == signal_soft
010 == signal_very_soft
001 == signal_very_very_soft
000 == signal_off
Having more bits allows us to capture more in-between steps.
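The step counts above follow directly from the bit depth: N bits give 2^N levels, and the theoretical dynamic range grows by about 6.02 dB per bit (the standard 20·log10(2) rule of thumb). A minimal sketch of that arithmetic (not from the post itself, just the textbook relationship):

```python
import math

# For a few bit depths, show the number of distinct "shelves" an ADC
# can output and the theoretical dynamic range that implies.
for bits in (1, 2, 3, 16, 24):
    levels = 2 ** bits                       # 2^N quantization steps
    dyn_range_db = 20 * math.log10(levels)   # ~6.02 dB per bit
    print(f"{bits:>2}-bit ADC: {levels} levels, ~{dyn_range_db:.1f} dB dynamic range")
```

So 16-bit lands at roughly 96 dB and 24-bit at roughly 144 dB of theoretical dynamic range, which is where the "more bits = lower noise floor" shorthand comes from.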
These days, with music being crushed to 'signal_on_full_blast' from beginning to end, it wouldn't matter if we used a 128-bit ADC when we're only using 1-bit resolution: signal_off and signal_on_full_blast.
But getting back to Kambo, where you decide to set your corresponding analog reference level is up to you, and whoever else you need to work with. In my studio days, we calibrated ADC inputs to -18 dBFS as 0 VU; that way, even if the analog outputs danced above 0 VU, you still had 18 dB of "headroom" before clipping.
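The calibration arithmetic there is simple addition: if 0 VU is aligned to -18 dBFS, an analog level X dB above 0 VU lands at (X - 18) dBFS, and you clip when X reaches +18. A quick sketch, with the function and constant names being my own invention for illustration:

```python
# Assumed calibration: 0 VU on the analog side == -18 dBFS at the converter.
CALIBRATION_DBFS = -18.0

def vu_to_dbfs(level_db_re_0vu: float) -> float:
    """Map an analog level (in dB relative to 0 VU) to dBFS."""
    return level_db_re_0vu + CALIBRATION_DBFS

print(vu_to_dbfs(0.0))   # nominal level sits at -18 dBFS
print(vu_to_dbfs(6.0))   # a +6 peak lands at -12 dBFS, still 12 dB below clipping
print(vu_to_dbfs(18.0))  # +18 over nominal reaches 0 dBFS, the clip point
```

Other shops align to -20 dBFS or -14 dBFS instead; the math is the same, only the constant changes.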