Filter in vs out - peak level

GroupDIY Audio Forum

One additional thing that I played with just today. The T-Racks Stealth Limiter, which I sometimes find useful, provides an infrasonic filter that they describe this way: "Infrasonic Filter: this control engages a 4th order high pass filter with a 22Hz cutoff frequency. This is very useful to get rid of infrasonic and very low frequency rumbles which could put excessive workload on the limiter."

I noticed that with the Infrasonic Filter engaged, the gain-reduction meter shows a dB or two *more* gain reduction than with the filter bypassed. This indicates that the filter is adding peak level to the signal before limiting - just as @amproject's initial question about filters suggested.

Food for thought....
 
I've attached an image that purports to show the phase and group distortions from a minimum phase EQ (from Izotope's Ozone 10).
It looks like a very high-order filter, probably 6th or 8th. It is not entirely fair to use that as an example of analog filtering, since most HPFs are only 2nd order, or sometimes 3rd.
I'm sure they have simplified the graph, but it clearly demonstrates the distortions that occur above the knee of a HPF.
Of course, here the phase shift happens over a very wide range, so it's not representative of the common case.
Regardless of the image, I can clearly hear the difference between a vocal that has been high-passed (even well below 100 Hz) as compared to one that has not been high-passed, but has some tonal shaping on the bottom.
I'd be curious to identify which components of the spectrum are affected. It cannot be done with a simple null test, because there would always be a residual wherever the filter introduces phase shift. I'm not sure an FFT can really show it...
AFAIK, frequencies below 40Hz in vocal takes are parasitic.
I think the largest audience for pop music listens on headphones that are generally flat, or even exaggerated, down to 20 or 30 Hz. Even Apple AirPods produce a lot of output down to 20 Hz.
I must admit I've never mixed music for headphones, because I hate them (even more IEM's), so I'm not a good judge, although I use them for foldback of course, and for identification of problematic LF.
 
I used the 48dB/Octave slope to show an extreme case, but this type of EQ is used routinely in modern productions, probably thanks to the marketing departments of plugin manufacturers. Even with these exaggerated curves, we can see that the phase and group distortions extend into the midrange frequencies.

I am also curious about which signal components are affected, but I do hear the effect. I have proven that to myself over and over through careful listening tests. I also agree that most of the audio below 40 Hz (and likely below 100 Hz) is not part of a singer's output, but I also know that a brute-force HPF changes the quality of the remaining vocal sound.

I also agree with your assessment of headphones, but Apple reportedly sold over 60 million pairs of AirPods in 2019 (per Bloomberg). So, we have to assume that a large majority of listeners use headphones. Again, you are coming from a live-sound environment, whereas I'm speaking only of consumer audio distribution.

When I've done live sound mixing, and even sometimes in studio recording, I don't worry much about what artifacts the console EQ creates, other than gain-staging problems, because there is simply a job to do. It's when I'm mixing or mastering, and have time to critically compare different techniques and processors, that these observations come to light.

Have we beaten this topic to death yet? :)
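The "phase distortion reaches into the midrange" claim is easy to check numerically. A minimal sketch, assuming scipy is available; the 100 Hz corner and 48 kHz sample rate are illustrative choices, not values from this thread:

```python
import numpy as np
from scipy.signal import butter, sosfreqz

fs = 48000    # sample rate (assumed)
fc = 100      # HPF corner (assumed); 8th order ~= 48 dB/octave
sos = butter(8, fc, btype='highpass', fs=fs, output='sos')

# Phase response a full decade above the corner, where the
# amplitude response looks completely flat again
w, h = sosfreqz(sos, worN=[1000.0], fs=fs)
phase_deg = np.degrees(np.angle(h[0]))
print(f"phase shift at 1 kHz: {phase_deg:.0f} degrees")  # still tens of degrees
```

Even at 1 kHz, ten times the corner frequency, the steep HPF has not finished rotating the phase.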
 
So, we have to assume that a large majority of listeners use headphones.
Actually, I quite often see two people sharing one set, one earbud each. That doesn't bode well for their degree of appreciation of musical reproduction... :(
Have we beaten this topic to death yet? :)
We'll see; others may chime in...or not.
 
 
Speaking as a retired road and studio sound engineer who has spent recent years repairing and improving recording-studio equipment (the improvement often reaches 20 dB, provided the capsule is not defective): in my opinion, equalizers with gentler slopes produce a less unpleasant sound. Rather than EQing, I try to adapt the character of the microphones, microphone transformers, and IC sound to the given task. I restore many old condenser microphones, and my experience is that the multi-megohm resistors must be replaced (noise!), and the insulation resistance of the capacitors must be checked at operating voltage. Also, instead of the AC701 tube used in Neumann microphones, I use the Russian military 6S6-V miniature triode - it "does wonders" for the sound! If I can help, feel free to write! And "eternal life"!
 
It is fairly well known that lowpassing a wideband signal can cause an increase in level, and the same can be true of a high pass.
Even ignoring phase (which you shouldn't), consider that sin(wt) + 0.2 sin(3wt) has a LOWER peak amplitude than sin(wt), while clearly having more total power; so taking that signal and filtering out the higher-frequency component will inherently raise the peak level while dropping the reading on an RMS-reading meter.
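That arithmetic is easy to verify numerically; a minimal sketch (the sample count is arbitrary):

```python
import numpy as np

# One full cycle of the fundamental, finely sampled
t = np.linspace(0, 2 * np.pi, 100_000, endpoint=False)
composite = np.sin(t) + 0.2 * np.sin(3 * t)   # fundamental + 3rd harmonic
fundamental = np.sin(t)                        # harmonic filtered out

peak = lambda x: np.max(np.abs(x))
rms = lambda x: np.sqrt(np.mean(x ** 2))

print(peak(composite), peak(fundamental))  # ~0.87 vs 1.00: peak goes UP after filtering
print(rms(composite), rms(fundamental))    # ~0.72 vs ~0.71: power goes DOWN
```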

With real program material that has been through a modern limiter/clipper, the effect is actually more pronounced. A fairly sensitive way to detect that this has been done is simply to allpass the audio (thereby disturbing the phase relationships) and see how much the peak level grows. Obviously, any minimum-phase filtering will also do this to a greater or lesser extent, and it is sometimes not sufficiently appreciated that the phase change extends well outside the region where a filter is obviously in transition if you look only at the amplitude response.

There is nothing unique to the digital domain here (except that linear phase networks are practical there), this is easily demonstrated in the purely analog domain.
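The allpass "growth" test described above can be sketched as a toy example. This is an assumption on my part, not the poster's method: a first-order digital allpass with coefficient a = 0.5, applied to an ideal square wave (clipping taken to its extreme); a real detector would sweep several allpass corner frequencies.

```python
import numpy as np

def first_order_allpass(x, a=0.5):
    """y[n] = a*x[n] + x[n-1] - a*y[n-1]: unity magnitude at all
    frequencies, but it rotates the phase relationships."""
    y = np.zeros(len(x))
    x_prev = 0.0
    y_prev = 0.0
    for n, xn in enumerate(x):
        y[n] = a * xn + x_prev - a * y_prev
        x_prev, y_prev = xn, y[n]
    return y

# A square wave: roughly what a sine looks like after extreme clipping
t = np.arange(4800)
square = np.sign(np.sin(2 * np.pi * 100 * t / 48000))

rotated = first_order_allpass(square)
print(np.max(np.abs(square)), np.max(np.abs(rotated)))  # 1.0 vs ~1.5: peak grows
```

The flat tops of the clipped waveform only stay flat with exactly the phase relationships the clipper produced; disturbing them makes the peaks spring back up.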
 
It is fairly well known that lowpassing a wideband signal can cause an increase in level, and the same can be true of a high pass.
Do you have a demonstration of it with a HPF?
a fairly sensitive way to detect this having been done is simply to allpass the audio (Thereby disturbing the phase relationships) and see how much it grows
Actually, an APF can also reduce the crest factor; it is a feature of many broadcast processors. The outcome is program- and filter-dependent.
 
That was a common trick in AM broadcast back in the day; in fact, as I recall, someone (Gates?) made an entirely passive 600R box that did exactly that. It basically worked by reducing the asymmetry common in the human voice (and also seen in some brass instruments).
Effective in that context, but usually less so with more modern program formats.
 
Now I have another issue with this DDP tool... It's off topic, but:
I printed the same project to DDP twice. Apart from a song-name correction and adding ISRC codes, nothing changed.
Opening the two DDP images and exporting the audio to WAV (a single file for the whole album), the two .wav files don't null consistently (sometimes yes, sometimes no...).
Shouldn't EQ and comp/lim be deterministic, repeatable processing?!
I'll have to double-check why this happens; it's not good if I can't trust the tool 😬
 

I checked further... the two songs that don't null are the ones where I applied automation...
One on a tilt EQ, the other on a high shelf, and the difference covers the entire song, not only the part where the automation lines are not flat.
I won't go further into this since it's off topic, but now I'm a little nervous about this software.
I need to be able to trust what I send to my clients and to the plant...
 
Peak levels increasing when high passing on the master bus is common and well understood.

Here's a short video demonstration from Paul Frindle -

The practical answer is to only high-pass when absolutely necessary; I will use first-order low shelves almost always. Linear phase is not the answer IMO, it sounds awful.
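For what it's worth, a first-order low shelf can be built by mixing a one-pole lowpass back against the dry signal; a minimal sketch (the 100 Hz corner and -6 dB depth are illustrative assumptions, not a recommendation from the thread):

```python
import numpy as np

def low_shelf_cut(x, fs=48000, fc=100.0, gain_db=-6.0):
    """First-order low shelf: unity gain at HF, `gain_db` at DC.
    Implemented as y = x - (1 - g) * lowpass(x) with a one-pole lowpass."""
    g = 10 ** (gain_db / 20)                  # linear shelf gain at DC
    alpha = 1 - np.exp(-2 * np.pi * fc / fs)  # one-pole coefficient
    y = np.empty(len(x))
    lp = 0.0
    for n, xn in enumerate(x):
        lp += alpha * (xn - lp)               # one-pole lowpass state
        y[n] = xn - (1 - g) * lp              # shelve the lows down
    return y

# DC settles to g (-6 dB); content far above fc passes nearly untouched
dc = low_shelf_cut(np.ones(5000))
print(dc[-1])  # ~0.50
```

Unlike a high-pass, the gentle 6 dB/octave transition and the bounded shelf depth keep the phase rotation modest.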

Re your null issue: it's been shown that most DAW automation is not repeatable from pass to pass; this is likely the cause of your problem.
 
