May 02, 2022

On Monday, 2 May 2022 at 11:19:18 UTC, Ola Fosheim Grøstad wrote:

> I guess they find excitement in it, where I think of it as poor mastering. And I guess in some genres it is now considered bad mastering if you don't use excessive compression.

I don't think there is any real reason to trust one's own taste, as taste is socially constructed (cf. La Distinction by Bourdieu) and - simplifying - it reflects too much of your socioeconomic background to be significant. Music in particular reflects that.

> The French music scene might be different? French "electro" seemed more refined/sophisticated in sound than many other "similar" genres, but this is only my impression, which could be wrong.

French hip-hop was amazing (and popular) from 2017 to ~2021, but I don't think we have much else of interest. French electro is much less interesting than the Argentinian progressive house scene, for example, and that's just my opinion again. A lot of good music gets produced in niches only to be completely ignored nowadays, so it would be hard to say which scene is interesting; we all get to miss it anyway.

> I didn't understand this one, do you mean that musicians misunderstand what is causing the effect, so that they think it is caused by the main effect when instead it is caused by the internal delay of the unit? Or did you mean something else?

Oversampling typically produces:
A. a phase shift
B. anti-aliasing

but because aliasing is a tiny problem in dynamics processing in the first place, people choose to use it while hearing only (A), which can sound good by itself. The by-product becomes more desirable than the non-problem it solves. Now everyone wants the feature!
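
If it helps, here is roughly what a 2x-oversampled saturator does, as a toy sketch in D (made-up gain and FIR kernel, nothing production-grade), just to show where (A) and (B) enter the chain:

```d
import std.math : tanh;

// Naive FIR convolution. Every pass through a minimum-phase lowpass
// kernel `h` adds the phase shift (A).
double[] fir(const double[] x, const double[] h)
{
    auto y = new double[](x.length);
    y[] = 0.0;
    foreach (n; 0 .. x.length)
        foreach (k; 0 .. h.length)
            if (n >= k)
                y[n] += h[k] * x[n - k];
    return y;
}

// 2x-oversampled saturation: zero-stuff, lowpass, clip, lowpass,
// decimate. `h` should be a lowpass kernel with cutoff below the
// original Nyquist.
double[] saturate2x(const double[] input, const double[] h)
{
    // Upsample: insert a zero after every sample, then lowpass.
    auto up = new double[](input.length * 2);
    up[] = 0.0;
    foreach (i, s; input)
        up[2 * i] = 2.0 * s;   // x2 gain compensates the zero-stuffing
    up = fir(up, h);

    // The clipper now runs at twice the rate, so its harmonics have
    // headroom above the original Nyquist instead of folding back
    // into the audible band: that is the anti-aliasing (B).
    foreach (ref s; up)
        s = tanh(4.0 * s);

    // Lowpass once more, then keep every other sample.
    up = fir(up, h);
    auto result = new double[](input.length);
    foreach (i; 0 .. input.length)
        result[i] = up[2 * i];
    return result;
}
```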

> I do hear a difference when listening to my own mix (maybe because I've spent so many hours analysing it).

If the typical polished song is listened to as MP3, then MP3 becomes the norm.
And then what-everyone-else-is-doing sincerely sounds better to our ears. A process you could call "legitimation".

I had a strange conversation about Autotune once with a 20-year-old:

  • a heavily autotuned voice sounded "normal" and not-autotuned to her
  • but the talkbox in Kavinsky - Nightcall sounded ugly to her and "autotuned". Of course, she mentioned she didn't like Autotune, but she was unable to identify it in practice.
May 02, 2022

On Monday, 2 May 2022 at 07:39:29 UTC, bauss wrote:

> On Sunday, 1 May 2022 at 16:31:41 UTC, claptrap wrote:
>> On Sunday, 1 May 2022 at 15:50:17 UTC, Guillaume Piolat wrote:
>>> On Sunday, 1 May 2022 at 14:36:12 UTC, Ola Fosheim Grøstad wrote:
>
> Autotune and vocal mixing are two different things, albeit most people don't know the difference and think they're the same.
>
> A lot of people mistake vocal mixing for autotune, when it really isn't.
>
> Autotune takes vocals as input and changes each pitch to match a specific target pitch, etc.
>
> Vocal mixing might fix individual notes that were simply sung wrong, like an A that accidentally became an A# a single time in the chorus; you don't go through all the pitches in the vocal sample. On top of that it might add reverb, compression, etc., all of which has nothing to do with autotune but improves the sound a lot.

Yeah, that was started by Melodyne, which came out pretty soon after Autotune, and it really was pretty mind-blowing at the time.

But even before the "digital revolution" in sound recording, producers would just record multiple vocal tracks and cut in and out on the mixing desk, or cut the actual tape and splice it together. Then it was done with DAWs and samplers; now it's done with tools like Melodyne and Autotune.

And most people have no idea.

Record producers have been fixing vocals since the invention of magnetic tape.
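
For what it's worth, the arithmetic at the heart of hard pitch correction is trivial; a toy sketch in D (names made up, and the genuinely hard part Autotune solved — robust real-time pitch detection — is skipped entirely):

```d
import std.math : log2, pow, round;

/// Snap a detected frequency (Hz) to the nearest equal-tempered
/// semitone relative to A4 = 440 Hz. A hard "retune speed" applies
/// this to every detected pitch; gentler settings only nudge notes
/// that are clearly off.
double snapToSemitone(double freqHz, double a4 = 440.0)
{
    const semis = round(12.0 * log2(freqHz / a4));
    return a4 * pow(2.0, semis / 12.0);
}

unittest
{
    // A slightly sharp A4 (452 Hz) is pulled back to 440 Hz.
    assert(snapToSemitone(452.0) == 440.0);
}
```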

May 02, 2022

On Monday, 2 May 2022 at 08:52:06 UTC, Ola Fosheim Grøstad wrote:

> On Monday, 2 May 2022 at 01:43:03 UTC, claptrap wrote:
>
> However, the concept of decomposing sound into spectral components in order to modify or improve on the resulting sound has been an active field ever since ordinary computers were able to run an FFT in reasonable time. So there is no reason to claim that someone suddenly woke up with this obvious idea that nobody had thought about before. It comes down to executing and hitting a wave (being adopted).

It was adopted because it was revolutionary: it took something that was a tedious and difficult manual task and made it ridiculously easy. It wasn't about fashion or getting a few bigwig producers to make it popular.

And maybe other people had thought to themselves "wouldn't it be cool if we had some tool to automatically re-tune the vocals". I mean, "wouldn't it be cool if we could take this tedious manual task and automate it somehow" is probably the main driver of invention.

But to focus on that does a disservice to what is involved in actually getting it to work, and especially so in real time.

I used to loiter in a forum for audio software developers, and you know how often people would come in and post "I have this great idea for a product and I just need someone to implement it and we'll make loads of money"? All the time; so much so that there was a sticky at the top of the forum telling people why it's a dumb thing to post.

Genius isn't having the idea, it's more often than not making the idea work.

May 02, 2022

On Monday, 2 May 2022 at 14:34:24 UTC, claptrap wrote:

> Genius isn't having the idea, it's more often than not making the idea work.

The most interesting stuff is what comes from people associated with institutions like CCRMA, IRCAM and the like, but I am not sure I would ascribe genius to anything related to audio. Most of it is layers of knowledge, not one amazing discovery.

I guess Chowning's FM synthesis could qualify, but in general it is a series of smaller steps.

May 02, 2022

On Monday, 2 May 2022 at 13:44:24 UTC, Guillaume Piolat wrote:

> I don't think there is any real reason to trust one's own taste, as taste is socially constructed (cf. La Distinction by Bourdieu) and - simplifying - it reflects too much of your socioeconomic background to be significant. Music in particular reflects that.

I understand what you say, but in regard to aesthetic analysis you can think in terms of multiple dimensions. Some music is "meaningful" or "complex" in many dimensions.

Socioeconomics matter, but take Eurovision or TV singing contests. When you take the average of everyone's taste you end up with not-very-interesting music, at best engaging entertainment. I was recently very disappointed in the Norwegian version of The Voice: there were some phenomenal singers, and the professional jury celebrated them, but when the viewers got to vote, they voted for the guy who sang a boring Beatles rendition or the singer with good dance moves… Basically, the good technical vocalists were voted out.

I guess we can discuss the merits of taste, but if "all musicians" would pick one and the majority of "non-musicians" would pick another, there are some objective aspects to taste that go beyond "socioeconomic" reasons.

> French hip-hop was amazing (and popular) from 2017 to ~2021, but I don't think we have much else of interest. French electro is much less interesting than the Argentinian progressive house scene, for example, and that's just my opinion again.

Thanks for the tip, I'll try to find some Argentinian progressive house. Latin producers often add a new flair to dance-oriented genres. (Not to mention the top hip-hop mixing duo Latin Rascals in the 80s, still worth a listen, in my opinion.)

> A lot of good music gets produced in niches only to be completely ignored nowadays, so it would be hard to say which scene is interesting; we all get to miss it anyway.

It is difficult to be visible when 50,000 songs are released every day? (Or was it a different number? Something huge, anyway.) It is quite mind-blowing how transformative capable home computers have been.

> Oversampling typically produces:
> A. a phase shift
> B. anti-aliasing

I don't think I understand what you mean by oversampling. Why does sampling at 96kHz instead of 48kHz have any sonic impact? It shouldn't?

> The by-product becomes more desirable than the non-problem it solves. Now everyone wants the feature!

This is new to me; is it related to some of your plugins? Got a link?

> I had a strange conversation about Autotune once with a 20-year-old:
>
>   • a heavily autotuned voice sounded "normal" and not-autotuned to her
>   • but the talkbox in Kavinsky - Nightcall sounded ugly to her and "autotuned". Of course, she mentioned she didn't like Autotune, but she was unable to identify it in practice.

Maybe there is an increasing gap in music perception between people who create music as a hobby (or pros) and the average person? Last year this singer performed on The Voice Norway without any pitch effects, and of course some would insist that it was Autotune.

(That Nightcall song reminds me of an analog 8-channel vocoder I built from a mail-order DIY kit back in the day, from a tiny company called PAiA. :-)

May 02, 2022
On Saturday, 30 April 2022 at 07:35:29 UTC, Max Samukha wrote:
> On Friday, 29 April 2022 at 20:17:38 UTC, Walter Bright wrote:
>> On 4/29/2022 12:10 PM, Walter Bright wrote:
>>> So why did other native languages suddenly start doing it after D did to the point of it being something a language can't skip anymore?
>>
>> I've seen endless lists of features people wanted to add to C and C++. None of them were CTFE. When we added it to D, people were excited and surprised.
>
> Your lists are not representative. When D added it, our reaction was more like "finally, somebody did that!". And even today, the feature is only marginally useful because of the countless forward reference bugs. I recently filed one more (https://issues.dlang.org/show_bug.cgi?id=22981), which is not a CTFE bug per se but was encountered in another futile attempt to generate code with CTFE in a reasonable manner.

I don’t know what your threshold for “marginally useful” is, but ctfe is proving its usefulness at Symmetry Investments every day. Not as a niche feature, as a “wherever we need it, all over the place” feature.
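
For anyone reading along who hasn't tried D: there is no annotation or macro sublanguage involved; an ordinary function is evaluated at compile time the moment the context demands a constant. A minimal example:

```d
// An ordinary run-time function...
ulong fib(ulong n)
{
    return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

// ...evaluated entirely at compile time, because an enum initializer
// must be a compile-time constant.
enum fib20 = fib(20);
static assert(fib20 == 6765);
```
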
May 02, 2022
On Monday, 2 May 2022 at 15:41:52 UTC, John Colvin wrote:

>
> I don’t know what your threshold for “marginally useful” is, but ctfe is proving its usefulness at Symmetry Investments every day. Not as a niche feature, as a “wherever we need it, all over the place” feature.

Yeah, I am aware you are using it heavily. "Marginally" is a hyperbole provoked by another compiler bug, which made me rethink and rewrite a good chunk of code.
May 02, 2022

On Monday, 2 May 2022 at 15:39:41 UTC, Ola Fosheim Grøstad wrote:

> On Monday, 2 May 2022 at 13:44:24 UTC, Guillaume Piolat wrote:
>> Oversampling typically produces:
>> A. a phase shift
>> B. anti-aliasing
>
> I don't think I understand what you mean by oversampling. Why does sampling at 96kHz instead of 48kHz have any sonic impact? It shouldn't?

Having thought about this some more: do you mean in AD converters or in DSP? I don't know too much about state-of-the-art AD circuits, but I would imagine that they use a higher internal sample rate so that they can use an analog filter that does not affect the audible signal in a destructive way, followed by a digital correction filter, followed by decimation? The result ought to be neutral?

Or are you talking about side-effects from low pass filters in the DSP process, moving the knee (-3dB) of the filter out of the audible range by using oversampling? But regardless, you should be able to use a phase-correcting allpass filter, if desired…?

I am not trying to be difficult, but I am trying to understand the context.

May 02, 2022
On 5/2/2022 12:10 AM, FeepingCreature wrote:
> That's fair, I'm kind of jumping in on the tail end of the discussion. So I'm probably missing a lot of context. I guess I just wanted to highlight that having compiletime in-language macros doesn't commit you to a compilation model that weakens the compiletime/runtime boundary. Anyway, I just have a hard time of seeing how the CLR target relates to this. Just because Nemerle targeted the CLR doesn't make it an interpreted language with regard to CTFE, because targeting the CLR doesn't actually buy you any simplicity in this. You can compile to code that is then loaded back into the running context just as easily on x86 as on CLR. For the purpose of compiler design, the CLR is just a processor, no?

Look at it this way. The runtime of Nemerle includes a compiler. This is quite different from CTFE, which does not rely on a compiler in the runtime.
May 02, 2022

On Monday, 2 May 2022 at 17:36:46 UTC, Ola Fosheim Grøstad wrote:

> Or are you talking about side-effects from low pass filters

Yes, in a DSP process, upsampling and downsampling are two lowpass filters themselves.

> But regardless, you should be able to use a phase-correcting allpass filter, if desired…?

You can go linear phase, yes, but you need to choose between:

  • introducing latency at all frequencies (linear phase), or
  • introducing a phase shift just in the bass (minimum phase).

Minimum phase is typically used because it is more efficient and better quality; linear phase sounds "metallic".
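
A quick way to see the linear-phase cost, as a toy sketch in D (a flat moving average stands in for a real linear-phase lowpass; the tap count is arbitrary): any FIR with symmetric taps delays every frequency by the same (N - 1) / 2 samples, which is pure latency.

```d
import std.stdio : writefln;

void main()
{
    enum N = 9;
    double[N] h = 1.0 / N;   // symmetric taps => exactly linear phase

    // Feed an impulse through the filter.
    double[16] x = 0.0;
    x[0] = 1.0;
    double[16] y = 0.0;
    foreach (n; 0 .. x.length)
        foreach (k; 0 .. N)
            if (n >= k)
                y[n] += h[k] * x[n - k];

    // The impulse response is centred on sample (N - 1) / 2 = 4:
    // four samples of delay, identical at every frequency.
    writefln("%(%.3f %)", y[]);
}
```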