January 28

On Friday, 24 January 2025 at 04:32:21 UTC, Mai Lapyst wrote:

> I think you have a misunderstanding (or multiple) here. Nobody here wants to take away threads or fibers from the language.

This is not my first day in programming and, unfortunately, I know perfectly well how the Overton window works:

  • No one will suffer from yet another feature X. If you don't want it, don't use it.
  • Here is a new API / library, but it is available only through X. Please rewrite part of your code to use X.
  • There are too many workarounds for X in the code. To get rid of them, you need to rewrite all of your code in terms of X.
  • X is actively supported, while the “outdated” mode of operation drowns in problems that have gone unfixed for years, some of which were introduced by the implementation of X.
  • At some point, code without X simply stops working and you have no choice left at all. See for example: https://www.npmjs.com/package/fibers
> Benchmarking is only as good and useful as the environment it is used in. I can easily create benchmarks that show how "slow" fibers are and how "fast" async is, and just as easily the other way around.

When the wise man points at the problem, the fool looks at the finger.
An asynchronous function cannot be inlined at the call site and cannot use faster allocation on the stack. The benchmark only shows that even the best modern JIT compilers are unable to optimize this away.
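As a rough illustration of that claim (my own sketch, not the benchmark in question): a direct call to a small function is typically inlined with its temporaries on the stack, whereas a call that has to go through a heap-allocated frame and an indirect resume generally is not. The class below is a hand-written stand-in for a lowered async frame, not real compiler output.

```d
// Rough illustration only; FakeAsyncFrame is an invented stand-in, not any
// real compiler lowering.
int addDirect(int a, int b)
{
    return a + b;                   // trivially inlined at the call site
}

final class FakeAsyncFrame          // hypothetical stand-in for an async frame
{
    int a, b;
    int delegate() resume;          // indirect call: hard for the optimizer to inline

    this(int a, int b)
    {
        this.a = a;
        this.b = b;
        resume = () => this.a + this.b;
    }
}

int addAsyncStyle(int a, int b)
{
    auto frame = new FakeAsyncFrame(a, b);  // heap allocation for the "frame"
    return frame.resume();                  // opaque indirect call
}

void main()
{
    assert(addDirect(2, 3) == addAsyncStyle(2, 3));
}
```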

> Just go ahead and try re-implementing it with fibers or OS threads where every call to fib(n) spawns a new thread and joins it. I think anyone would agree that that is just an insane waste of performance, which it rightfully is! Nobody in their right mind would try to calculate it in parallel because it is still only a "simple" calculation.

You either genuinely do not understand what you are talking about, or you are engaging in deliberate demagogy. Neither option does you credit. Where a single fiber makes a dozen fast calls with an optional yield somewhere in the depths, with async functions you get a dozen slow asynchronous calls, even when no asynchrony is actually required (for example, because of caching).
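A minimal sketch of that situation (my own, using nothing but druntime's `core.thread.Fiber`): inside a single fiber, the cached calls stay plain function calls, and only the one cache miss ever suspends.

```d
import core.thread : Fiber;

string[string] cache;   // thread-local cache (module-level variables are TLS in D)

string fetch(string key)
{
    if (auto hit = key in cache)    // fast path: an ordinary, inlinable call
        return *hit;

    Fiber.yield();                  // slow path only: suspend while "I/O" runs
    return cache[key] = "value for " ~ key;
}

void main()
{
    auto worker = new Fiber({
        foreach (i; 0 .. 12)
            fetch("config");        // one miss, eleven cache hits
    });

    worker.call();                  // runs until the single yield
    worker.call();                  // resumes once the "I/O" has completed
}
```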

> This need only arises from poorly used global variables / "impure" code, as the example you reference demonstrates very well;

No useful code is entirely pure. You either did not understand the problem that AsyncContext solves, or did not try to. Global (or rather thread/fiber-local) variables enable programming techniques that let you write simpler, more reliable and more efficient code. I will not deliver a lecture here on reactive programming, logging in exceptional situations, or tracking user actions; see this series for example: https://dev.to/ninjin/perfect-reactive-dependency-tracking-85e
Concurrent access to variables from multiple threads has nothing to do with it.
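For illustration, a minimal sketch (mine, not from the linked series) of the technique being referred to: a thread-local "current request" context that a logger picks up implicitly, without being threaded through every call. Carrying such a context across suspension points is the problem AsyncContext addresses.

```d
import std.stdio : writeln;

string currentRequestId;            // thread-local by default in D

void log(string msg)
{
    // the logger reads the ambient context; no parameter threading required
    writeln("[", currentRequestId, "] ", msg);
}

void businessLogic()
{
    log("doing work");              // deep in the call chain, still tagged
}

void handleRequest(string id)
{
    currentRequestId = id;          // set once at the entry point
    businessLogic();
}

void main()
{
    handleRequest("req-42");
}
```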

January 28

On Tuesday, 28 January 2025 at 16:31:35 UTC, Jin wrote:

> You either genuinely do not understand what you are talking about, or you are engaging in deliberate demagogy. Neither option does you credit.

Hello, I don't want to delete your post because it has relevant insights, but personal attacks are not allowed in this forum. Please phrase your criticism in a less hostile manner. Thank you.

February 01
Perma: https://gist.github.com/rikkimax/fe2578e1dfbf66346201fd191db4bdd4/7bba547fb6ea09deb2f0cfda2d852c409ace0142

I won't do another round. The functionality hasn't changed, but there have been clarifications to how I describe some things, as people requested, along with a new abstract.

I intend to put it into the queue before the next monthly meeting, so roughly a week away.
February 03

On Tuesday, 28 January 2025 at 16:31:35 UTC, Jin wrote:

> This is not my first day in programming and, unfortunately, I know perfectly well how the Overton window works:

Sure, but you forget a crucial detail in your analysis: humans and intention. Other projects (maybe with corporate funding behind them) will indeed throw usability under the bus for some sweet, sweet money (e.g. blockchain / AI), but Dlang is a community effort, entirely driven and held up by people who put their heart into it. Throwing both into the same bin and drawing conclusions about them isn't a fair game.

While there is a possibility that the same happens to Dlang, it would only be because no one is currently working on these features, and that is only because there are generally too few contributors, which in turn is an effect of there being hardly any help for onboarding / mentoring new contributors onto the project, which itself comes from the lack of people wanting to put effort into the language. But that is all a management and reputation issue of the project, not ill intent or a general lack of empathy towards people who prefer to work at a lower level (as seen in Go, which doesn't even give you access to its green-thread implementation for you to tweak!).

> An asynchronous function cannot be inlined at the call site and cannot use faster allocation on the stack.

First: they are just functions, so of course they will use stack allocation whenever possible, just like normal functions. The only difference is the state machine that is wrapped around them. Any value that needs to survive into another state is placed outside of the stack. But that still doesn't say how this memory gets allocated, as that is the responsibility of the executor driving the state machine! So you can perfectly well allocate it on the stack too, removing any "performance issue" that might arise. The only downside then is that your state machine is somewhat useless, but that would equally be the case if you wrote it by hand, so it is more a design problem of the executor than of the technique.
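A hand-written sketch of that idea (my own illustration, not the DIP's actual lowering): only the value that must survive the suspension point moves into the state struct, plain temporaries stay on the ordinary stack, and where the struct itself lives is decided entirely by whoever drives it.

```d
struct ReadTwice
{
    int state;      // which resume point we are at
    int first;      // must survive across the suspension, so it lives in the frame

    // feed one value per step; returns true while more steps remain
    bool step(int incoming)
    {
        switch (state)
        {
            case 0:
            {
                first = incoming;               // capture, then "suspend"
                state = 1;
                return true;
            }
            case 1:
            {
                int sum = first + incoming;     // ordinary stack temporary
                assert(sum == 3);
                state = 2;
                return false;
            }
            default:
                return false;                   // already finished
        }
    }
}

void main()
{
    ReadTwice sm;   // here the whole "frame" sits on the caller's stack
    sm.step(1);
    sm.step(2);
}
```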

> The benchmark only shows that even the best modern JIT compilers are unable to optimize this away.

Because you once again try to compare apples with oranges. Of course linear code with "normal" functions will look way more performant if you just look at the generated asm and draw your conclusion from that. But once again: async is for handling waits on states that you do not know when they will be ready, such as I/O! You just can't predict when your hard drive / kernel will answer the request for more data; you can only wait until it says so. That is why blocking I/O fell out of favor: it just stalls your program and you can't do anything else. How did we solve that? Right, by introducing parallelism via threads, which is just running code asynchronously to other code!!! But it was slow because of kernel context switches. The solution? Move it to userspace, aka fibers / green threads / lightweight threads! Same technique, different place; still the same idea of parallelism by executing code seemingly asynchronously to each other. Async functions / state machines are just the next evolution of that, just like we one day decided that goto for simple branching was too cumbersome to write and too error-prone, so we created if X ... else ..., for X ..., while X ... and so forth!

> Where a single fiber makes a dozen fast calls with an optional yield somewhere in the depths, with async functions you get a dozen slow asynchronous calls, even when no asynchrony is actually required (for example, because of caching).

Sure, but again you don't compare them fairly. An async call (with use of await) is just like calling Fiber.yield()! So to compare them on the same level, you would need to yield in every call, which is why my analogy of one thread per fib(n) call makes sense. So yes, of course async functions will be wasteful for a non-async task, but so will using threads for the same thing! In the end, it is not the technique's fault if a programmer simply uses it wrong; fib(n) shouldn't be parallel (in any form!) to begin with.
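To make the comparison concrete, this is what a "fair" fiber version would look like (my own sketch with `core.thread.Fiber`): it suspends on every single call, which nobody would actually write, and which is wasteful for the same reason awaiting every call would be.

```d
import core.thread : Fiber;

long fib(int n)
{
    Fiber.yield();      // suspend on every single call, mirroring awaiting everywhere
    return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

void main()
{
    long result;
    auto f = new Fiber({ result = fib(20); });

    while (f.state != Fiber.State.TERM)
        f.call();       // resume until the fiber finally finishes

    assert(result == 6765);
}
```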

> No useful code is entirely pure.

Fib is pure. Addition is pure. Any arithmetic is pure. Modifying an object via a method that only changes its fields is pure (in D's sense of pure). I would argue those are useful, unless of course you think that any code anywhere is a waste of time, in which case we can stop the whole discussion right here. I'm not saying we should all do only functional programming (I don't like overuse of it either!), but considering what it teaches you isn't a bad thing; like purity (which D has itself!) and effects. Just go ahead: grab any code and have D output the processed D code (e.g. the "AST" button on run.dlang.io); you'll quickly see that a ton of functions are actually marked pure by the compiler while containing sensible and useful code!
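For instance, this is perfectly ordinary D code where purity applies (my own example); `fib` is annotated explicitly, and for the template function the compiler infers the attributes on its own.

```d
long fib(int n) pure nothrow @nogc @safe
{
    return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

// attributes, including pure, are inferred automatically for templates
T scaled(T)(T value, T factor)
{
    return value * factor;
}

pure @safe unittest
{
    assert(fib(10) == 55);          // callable from pure code
    assert(scaled(21, 2) == 42);    // inferred pure, so also callable here
}
```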

> Global (or rather thread/fiber-local) variables enable programming techniques that let you write simpler, more reliable and more efficient code.

It's an optimization. Just like a feature that automatically creates & optimizes state machines is. Once again: if you're fine with writing such functions by hand, no one is stopping you. Fibers in D are an entirely library-driven construct. You can just rip them out of druntime and maintain your own version. Nothing prevents you from doing that!
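To underline that point: everything fiber-related below is plain druntime library API (`core.thread.Fiber`), no language support involved.

```d
import core.thread : Fiber;
import std.stdio : writeln;

void main()
{
    auto f = new Fiber({
        writeln("step 1");
        Fiber.yield();              // hand control back to the caller
        writeln("step 2");
    });

    f.call();                       // prints "step 1"
    writeln("between the steps");   // ordinary code runs in between
    f.call();                       // prints "step 2"
}
```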

February 03

On Friday, 31 January 2025 at 16:12:43 UTC, Richard (Rikki) Andrew Cattermole wrote:

> Perma: https://gist.github.com/rikkimax/fe2578e1dfbf66346201fd191db4bdd4/7bba547fb6ea09deb2f0cfda2d852c409ace0142

There's still no writing about opConstructCo. Maybe a bit of background will help you: I'm in the process of writing my own frontend for the D language, and as an implementor of the language I find myself in a unique spot for interacting with such feature requests.

Like I said: you explained that opConstructCo is the only method / function that should be called to construct coroutines by the AST lowering, you even called it a "new operator", but you still failed to include it in the section for changes to the language documentation. While it's correct that it's not needed for the grammar section, it is described in a text that starts with "a potential shell", telling me as an implementor that this is not required while in fact it is; just like you said: it's a new operator, so it should be clearly marked as such. Maybe in a new section after the grammar changes, or wherever, as long as it is made clear that it is required, as you yourself said:

> It is not part of the DIP. Without the operator overload example, it wouldn't be understood.
> ....
> The operator overload opConstructCo is part of the DIP.
> Therefore there are examples for it.

So it is part of the DIP! Please state it as such, without adding "potential" or "purpose of examples only" before the only occurrence of it inside the whole document.

Maybe a change like this:

```
# Implementation

...

Implementors also need to be aware of the new `opConstructCo` operator, which is used as a way to morph coroutine objects into library-understandable types.
```

Or something similar.

> D classes are a great example of this, forcing you to use the root class Object, and to hit issues with attributes, the monitor, etc.

I understand the sentiment behind it, and I agree that forcing the wrong things can be unproductive. But then again, it's a language; even requiring the spelling of an attribute is forcing the hand of programmers, so one or more types in core will not make much of a difference. I only fear that introducing such a complicated technique will lead to either more fragmentation of the community and/or vendor lock-in to the only library that will arise out of this, leading to more problems like the root class Object, which everyone finds unfortunate but is not willing to go head-to-head with said library to make better.

There is also a concern about people new to the language; it's hard enough as it is to get into D with its many pitfalls, not only that classes are GC'd, but also things like postblitting, which works completely differently from any other language and makes seemingly locally created instances suddenly shared with the parent instances. My fear is that introducing a hard-to-understand way of using asynchronous functions will lead to incompatible libraries that throw errors nobody quite understands, especially for beginners, which will drive them out of the room before they're even halfway in.

But it remains to be seen what the future holds. Like you said, it is at least able to morph into the other provided solutions, so for a start any knowledgeable enough person can write their own abstraction on top of it to get started using D's coroutines, and we'll see how it all plays out.

February 04
On 04/02/2025 10:43 AM, Mai Lapyst wrote:
> On Friday, 31 January 2025 at 16:12:43 UTC, Richard (Rikki) Andrew Cattermole wrote:
>> Perma: https://gist.github.com/rikkimax/fe2578e1dfbf66346201fd191db4bdd4/7bba547fb6ea09deb2f0cfda2d852c409ace0142
> 
> There's still no writing about `opConstructCo`. Maybe a bit of background will help you: I'm in the process of writing my own frontend for the D language, and as an implementor of the language I find myself in a unique spot for interacting with such feature requests.

"In the following example, a new operator overload ``opConstructCo`` static method is used in an example definition of a library type that represents a coroutine. It is later used in the construction of the library type from the language representation of it."

Is that better?

A link to your frontend would be appreciated; I'd like to see if you've done UAX31/C23 identifiers (yet).

> Like I said: you explained that `opConstructCo` is the only method / function that should be called to construct coroutines by the AST lowering, you even called it a "new operator", but you still failed to include it in the section for changes to the language documentation. While it's correct that it's not needed for the grammar section, it is described in a text that starts with "a *potential* shell", telling me as an implementor that this is **not** required while in fact it is; just like you said: it's a new operator, so it should be clearly marked as such. Maybe in a new section after the grammar changes, or wherever, as long as it is made clear that it is required, as you yourself said:

It's not in the grammar section because operator overloads are not here:

https://dlang.org/spec/grammar.html

>> It is not part of the DIP. Without the operator overload example, it wouldn't be understood.
>> ....
>> The operator overload ``opConstructCo`` is part of the DIP.
>> Therefore there are examples for it.
> 
> So it is part of the DIP! Please state it as such, without adding "potential" or "purpose of examples only" before the **only occurrence** of it inside the whole document.
> 
> Maybe a change like this:
> ```
> # Implementation
> 
> ...
> 
> Implementors also need to be aware of the new `opConstructCo` operator, which is used as a way to morph coroutine objects into library-understandable types.
> ```
> Or something similar.

See above.

February 04
On 04/02/2025 12:42 PM, Richard (Rikki) Andrew Cattermole wrote:
>     Like I said: you explained that |opConstructCo| is the only method /
>     function that should be called to construct coroutines by the AST
>     lowering, you even called it a "new operator", but you still failed
>     to include it in the section for changes to the language
>     documentation. While it's correct that it's not needed for the
>     grammar section, it is described in a text that starts with
>     "a /potential/ shell", telling me as an implementor that this is
>     *not* required while in fact it is; just like you said: it's a new
>     operator, so it should be clearly marked as such. Maybe in a new
>     section after the grammar changes, or wherever, as long as it is
>     made clear that it is required, as you yourself said:
> 
> It's not in the grammar section because operator overloads are not here:
> 
> https://dlang.org/spec/grammar.html

Okay I changed my mind.

"In addition to syntax changes there is a new operator overload ``opConstructCo``  which is a static method. This will flag the type it is within as an instanceable library coroutine type."

February 04

On Tuesday, 4 February 2025 at 02:30:49 UTC, Richard (Rikki) Andrew Cattermole wrote:

> A link to your frontend would be appreciated; I'd like to see if you've done UAX31/C23 identifiers (yet).

I hadn't added UAX31 since I used D's grammar specification to implement the lexer and it didn't contain them at the time of writing (and, from a quick glance, it seems it still doesn't).
But it's not that big of a deal, although I have to thank you, because checking revealed a slight problem in how I create code position data when using UTF-8 code points.

I also didn't send a link because I didn't know if anyone was interested, but here ya go.

> "In addition to syntax changes there is a new operator overload opConstructCo which is a static method. This will flag the type it is within as an instanceable library coroutine type."

That sounds awesome! Thank you!

6 days ago

On Monday, 3 February 2025 at 21:43:09 UTC, Mai Lapyst wrote:

> On Tuesday, 28 January 2025 at 16:31:35 UTC, Jin wrote:

FYI
https://reductor.dev/cpp/2023/08/10/the-downsides-of-coroutines.html
https://github.com/davidfowl/AspNetCoreDiagnosticScenarios/blob/master/AsyncGuidance.md
