On Monday, 2 May 2022 at 09:23:10 UTC, Guillaume Piolat wrote:
> Sometimes artifacts sound "good", be it for cultural or "objective" reasons.
Yes, this is true. Like, the loudness competition that led to excessive use of compression (multiband?) and ducking (to let the bass drum through) resulted in a sound image that was pumping in and out. I personally find that annoying, but when you see kids driving in the streets playing loud music they seem to favour this "musically bad" sound. I guess they find excitement in it, whereas I think of it as poor mastering. And I guess in some genres it is now considered bad mastering if you don't use excessive compression.
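To make that kind of sidechain ducking concrete, something like this minimal sketch (plain numpy; the "duck" helper and all parameter values are just illustrative, not from any real product):

import numpy as np

def duck(mix, kick, fs, depth=0.7, attack_ms=5.0, release_ms=120.0):
    # One-pole envelope follower on the kick drives gain reduction on
    # the rest of the mix; the slow release is what you hear as "pumping".
    # Assumes both signals are float arrays normalized to [-1, 1].
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = 0.0
    out = np.empty_like(mix)
    for i, x in enumerate(np.abs(kick)):
        a = a_att if x > env else a_rel        # fast attack, slow release
        env = a * env + (1.0 - a) * x
        out[i] = mix[i] * (1.0 - depth * env)  # more kick -> quieter mix
    return out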
I believe this loudness competition and "overproduction" have also affected non-pop genres. Once you get the ability to tweak, it is difficult to stop in time… I actually frequently find the live performances of talented singers on YouTube more interesting than their studio recordings.
The French music scene might be different? French "electro" seemed more refined/sophisticated in sound than many otherwise "similar" genres, but this is only my impression, and it could be wrong.
> Many small delays can help a voice "fit in the mix", and spectral leakage in a phase vocoder does just that. So some may want to go through an STFT process just for the sound of leakage, that makes a voice sound "processed" (even without pitch change). Why? Because in a live performance, you would have those delays because of mic leakage.
I hadn't thought of that. It is an interesting perspective about mics, but a phase vocoder has other challenges related to changing the frequency content. How would you create a glissando from scratch using only the inverse FFT? It is not so obvious. How do you tell the difference between a click and a "shhhhhhh" sound? The only difference is in the phase (see the sketch below)… so it is not very intuitive in the frequency domain, but very intuitive in the time domain. You don't only get spectral leakage from windowing; you can also get phasing artifacts when you manipulate the frequency content. And so on…
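Here is a minimal sketch of the click versus "shhhhhhh" point (plain numpy): both signals below have exactly the same magnitude spectrum, and only the phase differs.

import numpy as np

n = 1024
click = np.zeros(n)
click[0] = 1.0                      # unit impulse: flat magnitude, zero phase

mag = np.abs(np.fft.rfft(click))
phase = np.random.uniform(-np.pi, np.pi, mag.shape)
phase[0] = phase[-1] = 0.0          # DC and Nyquist bins must stay real
noise = np.fft.irfft(mag * np.exp(1j * phase), n)

# Identical magnitude spectra, but 'click' is an impulse and
# 'noise' sounds like a short "shhhh" burst.
assert np.allclose(np.abs(np.fft.rfft(noise)), mag)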
But the audience today is very much accustomed to electronic soundscapes in mainstream music, so sounding "artificial" is not a negative. In the 80s you could see people argue, seriously and with a fair amount of contempt, that electronic music wasn't real music… That is a big difference!
Maybe similar things are happening in programming. Maybe very young programmers have a completely different view of what programming should be like? I don't know, but I've got a feeling that they would view C as a relic of the past. If we were teens, would we then focus on the GPU and forget about the CPU, or just patch together libraries in JavaScript? JavaScript is actually quite capable today, so…
> The phase-shift in oversampling? It can make drums sound more processed by delaying the basses, again. To the point that people use oversampling for processors that only add minimal aliasing.
I didn't understand this one. Do you mean that musicians misunderstand what is causing the effect, so that they think it is caused by the main processing when it is actually caused by the internal delay of the unit? Or did you mean something else?
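If it is the delay, I can at least see the mechanism: the resampling filters in an oversampling stage have their own latency. A minimal sketch (scipy assumed; the filter length is just illustrative): a linear-phase FIR of N taps delays everything, bass included, by (N - 1) / 2 samples per pass, and if the plugin doesn't report that latency, the wet signal simply lags the dry one.

import numpy as np
from scipy import signal

taps = signal.firwin(63, 0.5)   # lowpass at the original Nyquist, for 2x oversampling

x = np.zeros(256)
x[0] = 1.0                      # impulse at the 2x rate
y = np.convolve(x, taps)[:256]
print(np.argmax(np.abs(y)))     # 31 = (63 - 1) / 2 samples of delay, ~0.32 ms at 96 kHz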
> Plus in the 2020s, anything with the sound of a popular codec is going to sound "good" because it's the sound of streaming.
I hadn't thought of that. I'm not sure if I hear the difference between the original and the mp3 when playing other people's music (maybe in the hi-hats). I do hear a difference when listening to my own mix (maybe because I've spent so many hours analysing it).