May 04, 2022
On Tuesday, 3 May 2022 at 19:01:44 UTC, Walter Bright wrote:

>
> I'm surprised C# can't do CTFE. I guess its creators never thought of it :-)

Nemerle creators did (obviously inspired by LISPs and MLs).

>
> Java can create and compile code at runtime. I ran into this when creating a Java native compiler for Symantec. It was used very rarely, but just enough to sink the notion of a native compiler.

The important part is that Nemerle can execute functions at compile time - whether it's done via interpretation or compilation is not relevant to the argument. D could as well compile CTFE into native code or IL (as in newCTFE) from the start.

Also, one could argue that native code is not native code, because it is further translated by the CPU into microcode, hence the CPU is an interpreter, hence any native compiler is not native. We'd rather accept that IL is native code to the VM and move on.

May 04, 2022
On Wednesday, 4 May 2022 at 05:54:42 UTC, Max Samukha wrote:
> On Tuesday, 3 May 2022 at 19:01:44 UTC, Walter Bright wrote:
>
>>
>> I'm surprised C# can't do CTFE. I guess its creators never thought of it :-)
>
> Nemerle creators did (obviously inspired by LISPs and MLs).
>
>>
>> Java can create and compile code at runtime. I ran into this when creating a Java native compiler for Symantec. It was used very rarely, but just enough to sink the notion of a native compiler.
>
> The important part is that Nemerle can execute functions at compile time - whether it's done via interpretation or compilation is not relevant to the argument. D could as well compile CTFE into native code or IL (as in newCTFE) from the start.
>
> Also, one could argue that native code is not native code, because it is further translated by the CPU into microcode, hence the CPU is an interpreter, hence any native compiler is not native. We'd rather accept that IL is native code to the VM and move on.

Native code is indeed not *exactly* native code, but calling a CPU an interpreter is either false or requires such a loose definition of "interpreter" that it loses most of its descriptive power. Basically all CPUs translate an ISA into some kind of internal state; big processors just happen to have an extra layer.

Also, I suppose this is mostly nomenclature: the instructions are translated into micro-operations/uops, whereas "microcode" as a term is usually reserved for either extremely complex instructions or for configuring CPU features. Otherwise you have the term microcode referring to the same thing it always has, versus an innovation Intel made with the early Pentiums (pentia?).

May 04, 2022

On Wednesday, 4 May 2022 at 06:29:55 UTC, max haughton wrote:

> Native code is indeed not exactly native code however calling a CPU an interpreter is either false or requires such a loose definition of interpreter that it loses most of its descriptive power.

In this context there is no difference between a VM and hardware. If you can build hardware for the VM, they should be considered similar.

> Basically all CPUs translate an ISA into some kind of internal state, big processors just happen to have an extra layer.

The term RISC came into use to distinguish those that did require decoding from those that did not.

> Also, I suppose this is mostly nomenclature, the instructions are translated into micro-operations/uops whereas microcode as a term is usually reserved for either extremely complex instructions or configuring CPU features.

Microcode refers to the instruction sequence that is used internally after decoding.

May 04, 2022
On Tuesday, 3 May 2022 at 19:01:44 UTC, Walter Bright wrote:
> On 5/3/2022 12:34 AM, Max Samukha wrote:
>> On Monday, 2 May 2022 at 20:24:29 UTC, Walter Bright wrote:
>> 
>>> It sounds just like how Lisp, Java, and C# work. Nemerle even uses the same interpreter/code generator as C#.
>> 
>> C# can't do CTFE. For example, in C#, you can't generate code (without resorting to hacks) at compile time based on UDAs the way you can in Nemerle or D. In C#, you usually process UDAs at runtime. I guess that is what you mean when you say "it needs compiler runtime at runtime". Yes, C# needs one because it must defer code generation to runtime.
>
> I'm surprised C# can't do CTFE. I guess its creators never thought of it :-)
>

https://docs.microsoft.com/en-us/dotnet/csharp/roslyn-sdk/source-generators-overview
May 04, 2022

On Tuesday, 3 May 2022 at 15:40:45 UTC, Ola Fosheim Grøstad wrote:

> On Tuesday, 3 May 2022 at 14:59:12 UTC, claptrap wrote:
>
> Ok, but in DSP I think many ideas are obvious if you know the field, but getting the right mix, the right hacks, the right tweaks, getting it to run fast and making it sound good takes a lot of effort (or can happen as an accident :-). I certainly don't doubt that there are many years of highly skilled effort that has gone into the product as it is today. But that is solid engineering, not a moment of "wowsers!" :-D

That's pretty much my experience. The actual math / "engineering" part is fairly straightforward if you're decent at math. But making it sound good is a bit more art than science, I reckon. I guess at the end of the day it's because it's being used to make art, and that is a much more subjective realm.

>> See to me that's less impressive, I mean I reckon people were doing FM synthesis with analog hardware already. So it was more likely just a refinement, or exploration, it's actually technically pretty simple.
>
> It is difficult to find any individual discovery that is obviously impressive, and I guess putting a sin() into another sin() may seem intuitive, given people already used LFOs. I think the work he put into making it musically useful and expressive, creating new types of bell-like sounds, is why people emphasize his contribution. I find this wiki-quote a bit funny: «This was Stanford's most lucrative patent at one time, eclipsing many in electronics, computer science, and biotechnology.»

It's just that the building blocks in an FM synthesiser are quite simple, at least conceptually; I reckon I could knock one up in about 30 minutes, just the audio part anyway. Even the math is pretty straightforward, what sidebands you'll get etc... I think maybe it seems complicated to the end user because it's not very user friendly to make presets. But it's actually pretty simple, and was probably already being done on analog gear, I mean I imagine VCOs existed with linear frequency control back then?
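For what it's worth, the "sin() into another sin()" core really is tiny. Here is a minimal Python sketch of two-operator FM; the function name, the 200/280 Hz carrier/modulator ratio, and the fixed modulation index are my own illustrative choices, not any particular synth's patch:

```python
import math

def fm_sample(t: float, fc: float, fm: float, index: float) -> float:
    """Two-operator FM: a sine carrier at fc whose phase is modulated
    by a sine at fm; `index` controls sideband strength, and the
    sidebands appear at fc +/- k*fm."""
    return math.sin(2 * math.pi * fc * t + index * math.sin(2 * math.pi * fm * t))

# Render one second of a bell-ish tone (inharmonic fc:fm ratio) at 44.1 kHz
sr = 44100
tone = [fm_sample(n / sr, fc=200.0, fm=280.0, index=3.0) for n in range(sr)]
```

A real synthesiser would add envelopes on both the amplitude and the index, which is where most of the "making it sound good" effort goes.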

AutoTune, I reckon days maybe? Plus a lot of research and months of time experimenting, trying to make it not sound like crap?

>> I mean real time pitch tracking and artifact free pitch shifting are orders of magnitude harder problems than FM synthesis.
>
> Many people worked on that though? It is very much the work of a community… In general most things in audio build on something else. Like, the concept of vocoders is in some way ingenious, but it was invented for speech in telecom by Bell Labs in the 1930s.

That's engineering though, isn't it? The higher you get up complexity-wise, the more you're building on work done by other people. It doesn't mean we should only be impressed by people who lay foundations.

May 04, 2022

On Wednesday, 4 May 2022 at 12:30:13 UTC, claptrap wrote:

> The actual math / "engineering" part is fairly straightforward if you're decent at math. But making it sound good is a bit more art than science, I reckon. I guess at the end of the day it's because it's being used to make art, and that is a much more subjective realm.

Yes, that art aspect is what makes this field interesting too, as there is no objectively right or wrong tool. If you can enable artists to create new "modes" of expression, you have a success! (Even if it is as simple as setting all the lower bits to zero in a bitcrusher.)
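That bitcrusher really is about as small as effects get. A hedged Python sketch of the "zero the low bits" idea; the name `bitcrush` and the truncate-toward-zero quantiser are my own illustration:

```python
def bitcrush(sample: float, bits: int) -> float:
    """Reduce a sample in [-1.0, 1.0] to `bits` of resolution by
    quantising it to a coarse integer grid and scaling back, which
    is equivalent to zeroing the low-order bits of a fixed-point
    representation."""
    levels = 1 << (bits - 1)   # quantisation steps per polarity
    q = int(sample * levels)   # truncate toward zero, discarding low bits
    return q / levels

# Crushing to 4 bits collapses nearby values onto the same step:
print(bitcrush(0.5, 4))    # -> 0.5
print(bitcrush(0.51, 4))   # -> 0.5  (0.51 * 8 = 4.08 truncates to 4)
```

The harsh stair-step distortion this produces is exactly the "lo-fi" character artists want from the effect.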

Same thing in visuals: when researchers got bored with photo-realistic rendering and started looking at non-photo-realistic rendering, they opened up endless new possibilities for artistic toolmaking.

> It's just that the building blocks in an FM synthesiser are quite simple, at least conceptually; I reckon I could knock one up in about 30 minutes, just the audio part anyway. Even the math is pretty straightforward, what sidebands you'll get etc...

Yes, I agree. I only mentioned it because it is difficult to find areas where you can point to one person doing it all on their own. Also, doing computer music on mainframes back then must have been tedious! The best-known tool from that period is Music V, which has an open-source successor in CSound. The latter actually has an online IDE that one can play with for fun: https://ide.csound.com/

> That's engineering though, isn't it? The higher you get up complexity-wise, the more you're building on work done by other people. It doesn't mean we should only be impressed by people who lay foundations.

Sure, any tool that enables artists to create new expressions more easily is valuable, but it is quite rare that something, or something close to it, has not been tried in the past.

May 04, 2022

On Wednesday, 4 May 2022 at 00:41:10 UTC, zjh wrote:

> On Tuesday, 3 May 2022 at 23:09:31 UTC, mee6 wrote:
>
>> I think Elon Musk isn't really an engineer.
>
> A capitalist with 230B!
> Man that doesn't pay taxes.

Yes, and banks give him near-zero-interest loans that he doesn't have to pay tax on. The whole financial system needs to be gutted. It's a legacy system we're stuck in.

May 04, 2022

On Monday, 2 May 2022 at 12:07:17 UTC, Ola Fosheim Grøstad wrote:

> On Monday, 2 May 2022 at 08:57:21 UTC, user1234 wrote:
>
>> The concept of "windowing" + "overlap-add" to reduce artifacts is quite old, e.g. the Harris window is [1978]. Don't know about better ones (typically Hanning).
>> This doubles the amount of FFT required for a frame, but you seem to say this was technically possible.
>
> Yes, I assume anyone who knows about FFT also knows the theory for windowing? The theoretically "optimal" one for analysis is DPSS, although Kaiser is basically the same, but I never use those.

OK, I thought the artifacts you mentioned were about not using a window, or the rectangular window ;)
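For readers following along, the windowing + overlap-add idea above can be sketched in plain Python. Function names are mine, and a real STFT pipeline would use an FFT library and typically window on both analysis and synthesis; this only shows why 50% overlap with a Hann window reconstructs a signal without frame artifacts:

```python
import math

def hann(n: int) -> list[float]:
    """Periodic Hann window of length n; values at offsets i and
    i + n/2 sum to exactly 1, which is what makes 50%-hop
    overlap-add reconstruct a constant signal."""
    return [0.5 - 0.5 * math.cos(2 * math.pi * i / n) for i in range(n)]

def overlap_add(frames: list[list[float]], hop: int) -> list[float]:
    """Sum the frames back together at the given hop size."""
    out = [0.0] * (hop * (len(frames) - 1) + len(frames[0]))
    for k, frame in enumerate(frames):
        for i, s in enumerate(frame):
            out[k * hop + i] += s
    return out

# Window a constant signal of 1.0 into 4 frames at 50% overlap,
# then overlap-add: interior samples come back as 1.0 exactly.
n, hop = 8, 4
w = hann(n)
frames = [[1.0 * w[i] for i in range(n)] for _ in range(4)]
y = overlap_add(frames, hop)
```

Windows like Kaiser or DPSS need more careful hop choices to satisfy this constant-overlap-add property, which may be part of why Hann remains the workhorse.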

May 04, 2022

On Wednesday, 4 May 2022 at 17:23:48 UTC, user1234 wrote:

> OK, I thought the artifacts you mentioned were about not using a window, or the rectangular window ;)

Understood. One trick is to shift the whole content sideways by 50% before doing the FFT, so that the phase information you get is at the center of the bell-shaped top of the window function, which simplifies calculations.

(IIRC simpler effects can be done with 50% overlap, and 75% for more general usage.)

May 04, 2022

On Wednesday, 4 May 2022 at 17:38:28 UTC, Ola Fosheim Grøstad wrote:

> On Wednesday, 4 May 2022 at 17:23:48 UTC, user1234 wrote:
>
>> OK, I thought the artifacts you mentioned were about not using a window, or the rectangular window ;)
>
> Understood. One trick is to shift the whole content sideways by 50% before doing the FFT, so that the phase information you get is at the center of the bell-shaped top of the window function, which simplifies calculations.
>
> (IIRC simpler effects can be done with 50% overlap, and 75% for more general usage.)

15 years later, Prosoniq Morph is still top notch, I hope. The quality of the sound produced has still not been matched; it's still the best you can do in the frequency domain ;)

Well, there is also SpectrumWorx, but the morphing you could do with that product never reached the quality of Morph.