What LUFS do you aim for?

GroupDIY Audio Forum

https://www.loudnesspenalty.com/
This will help you see how your master compares on the most popular streaming services.

I used that to show some of my clients that there was no need to limit the crap out of their music (to make it louder), since streaming platforms would be lowering the volume anyway. Most of them don't really care.

In practical terms, I notice that if I send a mix or a master to a client at -14 LUFS it will not get approved; I get the reply "it still doesn't sound right". But if I send exactly the same mix or master at -9 or -10 LUFS, it gets approved with praise.
Most clients also don't care when I explain that excessive limiting to make it louder destroys part of the sound quality of their music, and that it isn't needed since streaming services target a lower loudness anyway. Exceptions to this are rare: I've only had one client/producer who understood this and actually asked me to lower the level, which I loved, and he keeps bringing me more artists to master.
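For what it's worth, the "turn-down" a normalizing platform applies is just the difference between its target and the track's measured loudness. A minimal sketch with illustrative numbers (not any platform's real algorithm; many platforms also don't turn quiet tracks *up* by the full amount):

```python
def playback_gain_db(measured_lufs: float, target_lufs: float = -14.0) -> float:
    """Gain (in dB) a normalizing platform would apply so a track plays
    back at the target loudness. Negative result = turned down.
    (Real platforms handle quiet tracks differently; omitted here.)"""
    return target_lufs - measured_lufs

# A "loud" -9 LUFS master simply gets pulled down 5 dB at playback...
print(playback_gain_db(-9.0))    # -5.0
# ...while a -14 LUFS master is left alone.
print(playback_gain_db(-14.0))   # 0.0
```

So the loudness bought by heavy limiting is given straight back by the player, which is exactly the point being made to clients above.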
 
Hi
I haven't yet dug deep into exactly what each streaming platform does regarding normalisation and "penalty".
But there is a drawback, probably an equally concerning one, if you deliver too quiet a master.
To lower loudness, the platform just turns the volume of the song down, keeping the original dynamics.
To increase the loudness of a song already hitting 0 dBFS true peak, it has to cut things and apply another brick-wall limiter...
Because of this, it's probably preferable to stay a little too hot, to be sure your song isn't compromised by unknown limiter processing.
Best
Zam
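Zam's asymmetry can be sketched in a few lines: turning a song down is a lossless per-sample gain, while pushing a full-scale song up forces some kind of ceiling. A toy illustration (a hard clip standing in for whatever limiter a platform might use; real limiters are gentler but still alter the waveform):

```python
def apply_gain(samples, gain_db):
    """Plain gain: one identical scale factor per sample, waveform preserved."""
    g = 10.0 ** (gain_db / 20.0)
    return [s * g for s in samples]

def gain_with_ceiling(samples, gain_db, ceiling=1.0):
    """Gain into a hard ceiling: anything pushed past it is flattened."""
    return [max(-ceiling, min(ceiling, s)) for s in apply_gain(samples, gain_db)]

src = [0.25, 1.0, -1.0, 0.5]            # already touches 0 dBFS
quieter = apply_gain(src, -6.0)         # turning down: shape fully preserved
louder = gain_with_ceiling(src, +6.0)   # turning up: the full-scale peaks clip
```

`quieter` is just `src` scaled; `louder` has its peaks squashed flat, which is the "compromised by unknown limiter processing" case.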
 
You are right, Zam. I also think it's better not to depend on some unknown limiter that might be applied by the streaming platform, and which will differ from platform to platform.

But anyway, I don't know any client who would approve a mix or a master that sits below -14 LUFS; in reality it will be perceived as a really low level when they compare it at home with other commercial releases.
 
A few more points.
- Short-term LUFS is, AFAIK, not actually defined by the ITU, so different plugin makers integrate the short-term value in slightly different ways. No big deal, they're more or less in the ballpark, but still, it's not a standard.
- What's referred to as "LUFS" is most of the time the integrated loudness of the full programme: it's calculated over the file from start to finish. It's a godsend for broadcast, a little less useful (but still useful) for music.
- Contrary to popular belief, Spotify does not normalize its content. Yes, it has an algorithm for calculating volume, but it's not LUFS/BS.1770 (it's their own, though it's close). Most importantly, it only gets used if a user enables the "play songs at the same level" option, which is off by default and which hardly anybody uses. It's also worth noting that it's calculated at the album level, not per song. So song volumes still vary a lot on Spotify, and nobody actually cares.
- YouTube, Apple Music... do have normalizing practices. If you ONLY publish/master for one of those platforms, it's good practice (IMHO) to aim for that level. To the mastering engineers out there complaining that it's a mess to master different material for different platforms, I always answer that THAT is your actual job, not adding the last creative high-frequency boost.
- -23 LUFS, the European broadcast standard (isn't the USA -24?), was long overdue and, again, a godsend. It's not perfect, but it's a pretty good starting point: it has dialogue modulating not far from what the BBC used to suggest as good dialogue broadcast practice back in the old VU days, which in itself is astonishing. Oh, and it's actually -23 ±1 dB, so you can go to -22.1 and still pass the playout software. What annoys me is that if you're out of spec, the broadcaster rejects it and you have to remix, when they could just click the option in their software and normalize it to -23 themselves. While it's very good for broadcast, it's way too low for music.
- Cinema has its own old way of setting level; they were the first to implement one. It's based on a very smart "play white noise to calibrate the system, then just go by ear" approach that has worked very well for decades. It sucks big time, though, when you play a movie mixed for a huge theater at night on your home setup: the explosions wake up the neighbors and the dialogue is way too faint to even hear.
- I believe the AES, or some institution like that, just put out new guidelines for streaming services; my memory sucks, but I think they were suggesting -16 for streaming?
- So that brings us to the actual point: the most important question is who you're mastering for.
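The -23 LUFS ±1 LU broadcast tolerance mentioned above boils down to a simple window check. A sketch (real QC tools also check true peak, maximum short-term loudness, loudness range, etc.):

```python
def passes_r128_loudness(integrated_lufs, target=-23.0, tolerance=1.0):
    """EBU R128-style programme loudness window: target +/- tolerance, inclusive."""
    return abs(integrated_lufs - target) <= tolerance

print(passes_r128_loudness(-22.1))   # True: within the +/-1 LU window
print(passes_r128_loudness(-21.5))   # False: 1.5 LU hot, gets rejected
```

This is why a -22.1 LUFS programme still passes, as noted in the bullet above, while anything hotter bounces back for a remix.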

Given that fewer and fewer people listen to CDs, where the loudness war is somehow still stupidly going on, mastering engineers finally have the luxury of more headroom and don't need to squash everything to death. The good old "make it sound good" advice has become paramount again. Depending on the genre you mix and master, that could mean compression in many different ways, but just make it sound good without aiming for a LUFS target (unless, again, you're only going on YouTube, or you want to fight the loudness war on CD). Most of the mastering engineers I read these days compress quite a bit for punch and glue, and end up in a ballpark above the -14 YouTube suggestion, I'd say anywhere around -12 to -9. But your material may vary a lot, and if it doesn't reach that, it's not a problem.
 
- Contrary to popular beliefs, Spotify does not normalize its contents.

I've never seen that being a popular belief, nor have I ever heard anyone say it.

For those mastering engineers out there complaining that it's a mess to master different stuff for different platforms, I always answer that THAT is your actual job, not adding the last high frequency creative boost.

Mastering involves a lot of different "jobs"; one is not more important than another, so you're overlooking a lot with that statement.
Anyway, I don't even think people should be doing different masters for different streaming platforms; I'm actually against it.
In my opinion there should be only one master, the same for all online distribution, and there are a lot of important and relevant reasons for that.
So saying "no" to making a different master for Spotify than for Apple Music is also part of my job as a mastering engineer, but I won't claim that that alone is "my job" and that the tonal balance between tracks is not.


Most of the mastering engineers I read these days will compress quite a bit for that punch and glue, and will end up for a ballpark above the -14 youtube suggestion. I'd say anywhere around -12 to -9.

Most Mastering Engineers I know don't use Compression at all in Mastering, or use it very lightly; and when compression is used, it's definitely not for volume.
I'm pretty sure you are confusing Compression with Peak Limiting, which are two completely different Tools.
 
Some good-sounding older peak limiters don't always deal with true peak, and that needs to be addressed in current delivery requirements.
 
Most Mastering Engineers I know don't use Compression at all in Mastering, or use it very light, but definitely when compression is used it is not for volume.
My mastering chain basically consists of three gain stages: setup gain, compression, and limiting. Plus EQ and whatever other tools I think need to be patched in (BTW, a limiter is just a compressor set to an infinite ratio, so one could say I only use compressors).

My compressor stage is not only for gluing but also creates headroom so I can gain upwards (make-up). The limiter just clips some of the leftover transients (let through by the slow compressor), making sure we don't go over -1 dB, while also letting me gain up a little more. Colouring compressors (fast attack/release) are mainly used during the mixing phase. Mind you, a limiter, with its fast attack/release and infinite ratio, is actually very colouring. I agree that with loud mixes a limiter tends to inflate the loudness the most.
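The "limiter is just a compressor at infinite ratio" remark can be seen directly in the static transfer curve of a downward compressor. A sketch (time constants and knee ignored):

```python
def compressor_out_db(in_db, threshold_db, ratio):
    """Static transfer curve of a downward compressor. Above the
    threshold, the excess level is divided by the ratio; as the ratio
    tends to infinity the excess vanishes, which is exactly a limiter."""
    if in_db <= threshold_db:
        return in_db
    return threshold_db + (in_db - threshold_db) / ratio

# 4:1 compression: a peak 8 dB over the threshold still pokes 2 dB over.
print(compressor_out_db(-2.0, -10.0, 4.0))           # -8.0
# "Infinite" ratio: the same peak is pinned at the threshold.
print(compressor_out_db(-2.0, -10.0, float("inf")))  # -10.0
```

Attack and release (and look-ahead) are what separate the tools in practice, but the gain law really is the same family.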

I tend to use my ears rather than aim for any LUFS target, but I've noticed my masters are on average around -14 LUFS (EBU R128 integrated), depending on genre. Classical floats around -18, but the louder genres (EDM/metal) average from -12 to -9 LUFS, and that's where I'd mainly use limiting (compression is useless on already fairly compressed mixes).

Limiters by definition sound bad: do a null test and you can hear the distortion being inserted, and it's far from a nice 2nd harmonic. But when it fits the genre and you balance the limiter carefully, nobody will notice or complain about the distortion you added, and it will actually sound great.
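The null test described here is just a subtraction: whatever doesn't cancel is what the processing changed. A toy sketch, with a hard clipper standing in for a limiter (a real limiter's residual is more complex, but the principle is the same):

```python
def hard_clip(samples, ceiling=0.8):
    """Crude stand-in for a limiter: anything over the ceiling is flattened."""
    return [max(-ceiling, min(ceiling, s)) for s in samples]

def null_residual(original, processed):
    """Subtract the two signals; whatever fails to cancel is what the
    processing changed -- the distortion you hear in a null test."""
    return [p - o for o, p in zip(original, processed)]

src = [0.1, 0.5, 0.95, -1.0, 0.3]
residual = null_residual(src, hard_clip(src))
# Only the two samples that hit the ceiling leave anything in the null;
# everything below it cancels to exactly zero.
```

Listening to that residual (instead of looking at it) is exactly the "hear the distortion" test from the post above.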
 
Most Mastering Engineers I know don't use Compression at all in Mastering, or use it very light, but definitely when compression is used it is not for volume.
I think it very much depends on what kind of work you do. A lot of pop music comes in super compressed; if you work mostly on pop, I can see rarely using a compressor. I work on a lot of experimental music with a very large dynamic range, where compression is essential. I also work with a lot of underground, low-budget material that often needs a bunch of work. Lastly, I work with vinyl, which is unforgiving: things you can get away with dynamically in digital will be a distorted mess on vinyl, and compression is essential there too. I just about never use compression for level; it's always to fix a specific audio issue.
 
I'm pretty sure you are confusing Compression with Peak Limiting, which are two completely different Tools.
I'm pretty sure I don't.

And for the Spotify thing just read right in this thread: "Spotify and YouTube adjust material to a target of -14 LUFS".
 
I'm pretty sure I don't.

It truly doesn't seem like it.

And for the Spotify thing just read right in this thread: "Spotify and YouTube adjust material to a target of -14 LUFS".

"Adjusting" volume and "Normalizing" are 2 completely different things,
like a "Compressor" and a "Mastering look-ahead brick-wall Limiter" are also.

just my 2 cents,
have a good time
 
"Adjusting" volume and "Normalizing" are 2 completely different things,
like a "Compressor" and a "Mastering look-ahed brick-wall Limiter" are also.

You are right; I used the term "normalize" superficially, in its general rather than specific sense. I see you like being very precise. In that case, it's "ahead", not "ahed". But since you were also among those who posted a table with Spotify numbers, please be aware that Spotify 1) does not "adjust" levels unless you click the option, 2) does not use LUFS, 3) nobody I know uses that option, and 4) calculates volume per album, not per song. Which means song levels on Spotify are mostly all over the place. Which brings us back to what I was advising and what mastering engineers prefer to do nowadays, given the newly available headroom: master to what sounds good and distribute it when it sounds good.
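For illustration, album-level normalization (one gain for the whole record, derived from the album's overall loudness) preserves the level differences between songs that per-track normalization would flatten. A generic sketch, not any platform's actual algorithm:

```python
def normalize_album(track_lufs, album_lufs, target_lufs=-14.0):
    """Album-mode normalization: a single gain, derived from the album's
    overall loudness, applied to every track. Track-to-track level
    differences survive (unlike per-track normalization)."""
    gain_db = target_lufs - album_lufs
    return [lufs + gain_db for lufs in track_lufs]

tracks = [-8.0, -11.0, -16.0]   # a loud single, a mid track, a quiet ballad
levelled = normalize_album(tracks, album_lufs=-10.0)
print(levelled)                 # [-12.0, -15.0, -20.0]
# The ballad stays 8 dB under the single, exactly as mastered.
```

Which is also why, measured song by song, levels on such a platform can look "all over the place": the album's internal dynamics are intentionally untouched.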

FWIW, mastering stemmed from the need to print the (finished) mixes on different supports, like 45s, 33s, tape and so on; in a sense, mastering started as the need to prepare the master for the support. Different supports/platforms, different masters. In that sense, I believe it's still the job of the mastering engineer to prepare the master for different platforms, but I've heard many mastering engineers specifically complain about this. What I also said was that if you ONLY publish on platforms that normal... oops, sorry, *adjust* to a target LUFS, then it's sensible to normal... oops, sorry again, *adjust* to that target level before uploading.

It truly doesn't seem like it
Not sure what you inferred it from, but I can differentiate between a compressor and a limiter. Yours is a bold and not very polite statement, and I was never disrespectful towards you. Other mastering engineers below your answer chimed in saying they do use compressors in their signal chain, even in ways you specifically said they have no use for ("for volume", which, by the way, I did not say). I've never seen a mastering engineer without one in the chain, though that doesn't automatically mean everybody uses it.

I remember this forum as one of the hidden gems of the internet, everybody ready to help out (my post above was in that spirit: I felt some things about LUFS were still not clarified in the thread). Has this forum become salty, just like Reddit or that gear...space thing?
 
FWIW, mastering stemmed from the need to print the (finished) mixes on different supports, like 45s, 33s, tape and so on, so in a sense mastering started as a need to prepare the master for the supports; different supports/platforms, different masters. In that sense, I believe that it's still the job of the mastering engineer: prepare the master for different platforms,
Commonly referred to as "translation". The original meaning of mastering was creating the physical object that was the master record mould: the lacquer master. Sound quality was part of the job, more in the translation sense than in the make-it-sound-better sense. Mastering consoles through the 1970s had rudimentary processing choices.

Originally, mastering was done at record labels, and it was a junior position. It was the responsibility of the mix engineer to deliver mixes that translated well to vinyl. If the tape wasn't prepared well, it got kicked back; it wasn't fixed in mastering.

Over time it has become more of a post production job.

Real mastering engineers cut lacquers. 😎
 
Other mastering engineers below your answer chimed in saying they do use compressors in their signal chain, even in ways you specifically said they have no use for: "for volume", which, by the way, I did not say.

Paul clearly stated he doesn't ever use compression for volume/level.

Paul also stated, and I can totally agree, in line with my findings over the last 15 years of mastering records, that pop/rock mixes are almost always delivered to us with a lot of mixbus compression already applied during the mixing stage.
80% of my mastering work falls into the pop/rock/indie-rock genre, so I rarely feel the need for more compression on those projects; the compressor is not in the chain, but it can get into the chain quickly if it's needed for a specific issue.
Most colleagues of mine also don't have a compressor in the chain per se, but all of them, like myself, have compressors available in the rack.
It's definitely an important tool to have on hand, as it can be needed at any time.

You made some broad, general statements, as if they were universal, that I don't agree with. Don't get too sensitive about it; just don't expect to write on a public forum and have everyone agree with you.
We correct each other all the time around here, we disagree often, and we all learn a lot from each other.

Best Regards

Tiago
 
I never paid much attention to LUFS, but the latest version of Audacity includes a loudness normalisation tool which will set the LUFS of your track to any value you like. So I have been playing around with a few values, and I wondered what values folk typically use. I quite like -11 LUFS.

Cheers

Ian
Hi Ian,

I found clear information and suggestions here, (videos and text by Ian Shepherd) ....

https://productionadvice.co.uk/
I use Waves WLM (Waves Loudness Meter) to check my levels.
For the type of music I produce (acoustic, detailed, not over-compressed or too loud):
-14 LUFS integrated, -1 dB peak (to avoid intermodulation distortion), -10 momentary max.
This works for me, and is pretty well a summary of the advice Ian Shepherd gives at the link above.
 
Hi Ian,

I found clear information and suggestions here, (videos and text by Ian Shepherd) ....

https://productionadvice.co.uk/
I use Waves WLM (Waves Loudness Meter) to check my levels.
For the type of music I produce (acoustic, detailed, not over-compressed or too loud):
-14 LUFS integrated, -1 dB peak (to avoid intermodulation distortion), -10 momentary max.
This works for me, and is pretty well a summary of the advice Ian Shepherd gives at the link above.
Thanks for the info and link. I will check that out.

I am curious about your statement of "-1 dB peak to avoid intermodulation distortion". Can you elaborate on this, please?

Cheers

Ian
 
Thanks for the info and link. I will check that out.

I am curious about your statement of "-1 dB peak to avoid intermodulation distortion". Can you elaborate on this, please?

Cheers

Ian
Hello,

This is mostly in consideration of the various encoding processes used by online streaming platforms.
If you watch Part 6 of the series at the link above ("Final Delivery"), from about 20 minutes in, Ian Shepherd explains how this works.
 
