January 23
On Thursday, 23 January 2025 at 09:49:54 UTC, Richard (Rikki) Andrew Cattermole wrote:
>
> On 23/01/2025 10:00 PM, Sebastiaan Koppe wrote:
>> A good starting point would be the official reference https://en.cppreference.com/w/cpp/language/coroutines
>
> I didn't see anything worth adding, so I haven't.

I wouldn't dismiss it so easily. For one, it explains the mechanism by which suspended coroutines get scheduled again, whereas your DIP only mentions `WaitingOn` and doesn't go into detail about how it actually works.
January 23

On Thursday, 12 December 2024 at 10:36:50 UTC, Richard (Rikki) Andrew Cattermole wrote:

> Stackless coroutines

I should say, the term confused me for quite a while. That’s because the coroutine does have a stack (its own stack). I thought it would somehow not have one, since it’s called “stackless,” but it just means its stack isn’t the caller’s stack. That fact was kind of obvious to me, since that’s what “coroutine” already meant to me. I don’t see how a coroutine could even work otherwise.

Maybe it’s a good idea to call the proposal “Coroutines” and omit “stackless.”

January 23

On Monday, 13 January 2025 at 17:59:35 UTC, Atila Neves wrote:

> […]
> Why @async return instead of yield? Why have to add @async to the grammar if it looks like an attribute?

Agreed. My two cents: C# has yield return and yield break. The funny thing is, if D were open to contextual keywords, we could do the same; then yield wouldn’t even have to become a keyword. Alternatively, use yield_return and yield_break, which, yes, are valid identifiers, but have a near-zero probability of being present in existing D code.

If anything, the proper way to make yield into a keyword is __yield, not @yield.

January 24
On 24/01/2025 5:33 AM, Quirin Schroll wrote:
> On Thursday, 12 December 2024 at 10:36:50 UTC, Richard (Rikki) Andrew Cattermole wrote:
>> Stackless coroutines
> 
> I should say, the term confused me for quite a while. That’s because the coroutine does have a stack (its own stack). I thought it would somehow not have one, since it’s called “stackless,” but it just means its stack isn’t the caller’s stack. That fact was kind of obvious to me, since that’s what “coroutine” already meant to me. I don’t see how a coroutine could even work otherwise.
> 
> Maybe it’s a good idea to call the proposal “Coroutines” and omit “stackless.”

The term is correct.

A stackless coroutine uses the thread's stack, except for variables that cross a yield point in its function body; those get extracted onto the heap.

A stackful coroutine uses its own stack, not the thread's.
This is otherwise known in D as a fiber.

Over the last 20 years stackful coroutines have seen limited use, while stackless coroutines have only grown in implementations, if for no other reason than thread safety. Hence the association. But the word itself could mean either, which is why the DIP has to clarify which it is, although the spec may not adopt the term.

January 24
On 24/01/2025 4:55 AM, Sebastiaan Koppe wrote:
> On Thursday, 23 January 2025 at 09:49:54 UTC, Richard (Rikki) Andrew Cattermole wrote:
>>
>> On 23/01/2025 10:00 PM, Sebastiaan Koppe wrote:
>>> A good starting point would be the official reference https://en.cppreference.com/w/cpp/language/coroutines
>>
>> I didn't see anything worth adding, so I haven't.
> 
> I wouldn't dismiss it so easily. For one, it explains the mechanism by which suspended coroutines get scheduled again, whereas your DIP only mentions `WaitingOn` and doesn't go into detail about how it actually works.

Ahhh ok, you are looking for a statement to the effect of: "A coroutine may only be executed if it is not complete and if it has a dependency that must complete or produce a value."

The reason it is not in the DIP is because this is library behavior.

On the language side there is no such guarantee; you should be free to execute them repeatedly without error. There could be logic bugs, but the compiler cannot know that this is the case.

About the only time the compiler should prevent you from calling it is if there is no transition to execute (such as when it is already complete).

January 23

On Thursday, 23 January 2025 at 17:14:50 UTC, Richard (Rikki) Andrew Cattermole wrote:
> On 24/01/2025 4:55 AM, Sebastiaan Koppe wrote:
>> I wouldn't dismiss it so easily. For one it explains the mechanism by which suspended coroutines can get scheduled again, whereas your DIP only mentioned the WaitingOn but doesn't go into detail how it actually works.
>
> Ahhh ok, you are looking for a statement to the effect of: "A coroutine may only be executed if it is not complete and if it has a dependency for that to be complete or have a value."
>
> The reason it is not in the DIP is because this is library behavior.
>
> On the language side there is no such guarantee, you should be free to execute them repeatedly without error. There could be logic bugs, but the compiler cannot know that this is the case.
>
> About the only time the compiler should prevent you from calling it is if there is no transition to execute (such as it is now complete).

No, that is not what I mean.

When a coroutine yields on, say, a socket read, you'll want to park the coroutine until the socket read has completed. This requires a signal, on completion of the async operation, to the execution context to resume the coroutine.

The execution context in this case could be the main thread, a pool, etc.

From the above-mentioned C++ link:

> The coroutine is suspended (its coroutine state is populated with local variables and current suspension point).
> awaiter.await_suspend(handle) is called, where handle is the coroutine handle representing the current coroutine. Inside that function, the suspended coroutine state is observable via that handle, and it's this function's responsibility to schedule it to resume on some executor, or to be destroyed (returning false counts as scheduling).

January 24
On 24/01/2025 9:12 AM, Sebastiaan Koppe wrote:
> On Thursday, 23 January 2025 at 17:14:50 UTC, Richard (Rikki) Andrew Cattermole wrote:
>> On 24/01/2025 4:55 AM, Sebastiaan Koppe wrote:
>>> I wouldn't dismiss it so easily. For one it explains the mechanism by which suspended coroutines can get scheduled again, whereas your DIP only mentioned the `WaitingOn` but doesn't go into detail how it actually works.
>>
>> Ahhh ok, you are looking for a statement to the effect of: "A coroutine may only be executed if it is not complete and if it has a dependency for that to be complete or have a value."
>>
>> The reason it is not in the DIP is because this a library behavior.
>>
>> On the language side there is no such guarantee, you should be free to execute them repeatedly without error. There could be logic bugs, but the compiler cannot know that this is the case.
>>
>> About the only time the compiler should prevent you from calling it is if there is no transition to execute (such as it is now complete).
> 
> No, that is not what I mean.
> 
> Upon yielding a coroutine, say a socket read, you'll want to park the coroutine until the socket read has completed. This requires a signal on completion of the async operation to the execution context to resume the coroutine.
> 
> The execution context in this case could be the main thread, a pool, etc.
> 
>  From that above mentioned C++ link:
> 
>> The coroutine is suspended (its coroutine state is populated with local variables and current suspension point).
>> awaiter.await_suspend(handle) is called, where handle is the coroutine handle representing the current coroutine. Inside that function, the suspended coroutine state is observable via that handle, and __it's this function's responsibility to schedule it to resume on some executor__, or to be destroyed (returning false counts as scheduling)

Right, I handle this as part of my scheduler and worker pool.

The language has no knowledge, nor need to know any of this which is why it is not in the DIP.

How scheduling works can only lead to confusion if it is described in a language-only proposal (I've had Walter latch on to such descriptions in the past, and it was not helpful).

January 23
On Thursday, 23 January 2025 at 20:37:59 UTC, Richard (Rikki) Andrew Cattermole wrote:
>
> On 24/01/2025 9:12 AM, Sebastiaan Koppe wrote:
>> Upon yielding a coroutine, say a socket read, you'll want to park the coroutine until the socket read has completed. This requires a signal on completion of the async operation to the execution context to resume the coroutine.
>
> Right, I handle this as part of my scheduler and worker pool.
>
> The language has no knowledge, nor need to know any of this which is why it is not in the DIP.

Without having a notion of how this might work, I can't reasonably comment on this DIP.

> How scheduling works, can only lead to confusion if it is described in a language only proposal (I've had Walter attach on to such descriptions in the past and was not helpful).

You don't need to describe how scheduling works, just the mechanism by which a scheduler gets notified when a coroutine is ready for resumption.

Rust has a Waker, C++ has the await_suspend function, etc.
January 24
On 24/01/2025 10:17 AM, Sebastiaan Koppe wrote:
> On Thursday, 23 January 2025 at 20:37:59 UTC, Richard (Rikki) Andrew Cattermole wrote:
>>
>> On 24/01/2025 9:12 AM, Sebastiaan Koppe wrote:
>>> Upon yielding a coroutine, say a socket read, you'll want to park the coroutine until the socket read has completed. This requires a signal on completion of the async operation to the execution context to resume the coroutine.
>>
>> Right, I handle this as part of my scheduler and worker pool.
>>
>> The language has no knowledge, nor need to know any of this which is why it is not in the DIP.
> 
> Without having a notion on how this might work I can't reasonably comment on this DIP.
> 
>> How scheduling works, can only lead to confusion if it is described in a language only proposal (I've had Walter attach on to such descriptions in the past and was not helpful).
> 
> You don't need to describe how scheduling works, just the mechanism by which a scheduler gets notified when a coroutine is ready for resumption.
> 
> Rust has a Waker, C++ has the await_suspend function, etc.

Are you wanting this snippet?

```d
// If there are any dependents, unblock them and schedule their execution.
void onComplete(GenericCoroutine);

// Depender depends upon dependency; when the dependency has a value or completes, unblock the depender.
// May need to handle dependency for scheduling.
void seeDependency(GenericCoroutine dependency, GenericCoroutine depender);

// Reschedule coroutine for execution
void reschedule(GenericCoroutine);

void execute(COState)(GenericCoroutine us, COState* coState) {
    if (coState.tag >= 0) {
        coState.execute();

        coState.waitingOnCoroutine.match{
            (:None) {};

            (GenericCoroutine dependency) {
                seeDependency(dependency, us);
            };

            // Others? Futures, etc.
        };
    }

    if (coState.tag < 0)
        onComplete(us);
    else
        reschedule(us);
}
```

Where ``COState`` is the generated struct as per the Description -> State heading.

Where ``GenericCoroutine`` is the non-templated parent struct of ``Future`` as described by the DIP.

Due to this depending on sumtypes, I can't put it in as-is.

Every library will do this a bit differently, but it gives the general idea. For example, you could return the dependency and have it executed immediately rather than letting the scheduler handle it.

January 24

On Sunday, 19 January 2025 at 18:46:23 UTC, Jin wrote:

> I see that you prefer not to notice the problems instead of solving them. Good luck to you with this undertaking - you will need it. But if this cancer of "modern" programming languages creeps into D, I’ll finally switch to some Go.

I think you have a misunderstanding (or multiple) here. Nobody here wants to take away threads or fibers from the language. And besides, even threads and fibers have many of the same problems as stackless coroutines; the only real difference is the implementation and, to some extent, their usage.

> • [Low performance due to the inability to properly optimize the code.](https://page.hyoo.ru/#!=btunlj_fp1tum/View'btunlj_fp1tum'.Details=%D0%90%D1%81%D0%B8%D0%BD%D1%85%D1%80%D0%BE%D0%BD%D0%BD%D1%8B%D0%B9%20%D0%BA%D0%B5%D0%B9%D1%81)

Benchmarks are only as good and useful as the environment they are run in. I can easily create benchmarks that show how "slow" fibers are and how "fast" async is, as well as the other way around. Hell, anyone could claim that just spawning more OS threads is "somehow" faster than any green thread or async continuation if they just tweak their workload enough; because at the end of the day, that's exactly what's key to benchmarks: the workload.

Any form of concurrency only really excels when used in a workload where it is key to do things concurrently / in parallel, which mainly means IO-bound applications such as web servers. Any linear job, such as calculating a Fibonacci number, will always be slower when bloated with ANY form of concurrency. Just go ahead and try re-implementing it with fibers or OS threads, where every call to fib(n) spawns a new thread and joins it. I think anyone would agree that that's just an insane waste of performance, which it rightfully is! Nobody in their right mind would try to calculate it in parallel, because it's still only a "simple" calculation.

Another thing is when you have to deal with millions (!) of concurrent requests on a web server, where there's no guarantee that any of the requests resolve in linear time; in other words, without waiting on something else in some form. That's a stark contrast to a Fibonacci calculation, which will always be resolvable without any further waiting once it's started. This is due to the purity of these two workloads: Fibonacci is pure, as it only ever requires the inputs you give it directly. But 99.99% of web requests deal with some form of waiting: be it a database you need to wait for, a cache adapter like Redis, or a file you need to read; IO is largely time spent waiting. That's why we invented fibers and async in the first place: to spend the precious time we would otherwise wait doing actual work.

> • [The need to reinvent the stack as an AsyncContext.](https://github.com/tc39/proposal-async-context)

This need only arises from poorly used global variables / "impure" code, as the example you reference demonstrates very well; the async code correctly captures all values explicitly passed to functions. Only in the example where a "shared" variable (a global, for all that matters here) is introduced do problems start to creep in. These problems also arise if one uses fibers, btw, as globals are always a source of errors if not managed correctly. That's one of the reasons D supports writing "pure" code: if you eliminate any implicit outside truth and only consider values explicitly passed via parameters or return values, your code magically gets way safer and also easier for a compiler to optimize.

And btw, even threads and fibers have this context problem: because of it, we invented thread-locals, or in the case of fibers, fiber-locals. Just look at vibe.d; it builds on top of fibers and added fiber-local storage, because globals are inherently a problem in practically all concurrent code, not only async/await stackless coroutines.

> But if this cancer of "modern" programming languages creeps into D, I’ll finally switch to some Go.

It's funny that you mention Go, as it has some of the very flaws you yourself mentioned; it has the same context problem with globals; it expects you (like many other languages) to protect them with a mutex, or to use a type literally named 'Context'. Sure, it additionally has some race detection, but that only gets you so far. And your point about needing an extra CancellationToken type: that's also true for any threading and/or fibers, and in Go it's literally one of the first things you learn: waitgroups and context (again).

And I would ask you to keep this negativity out of these sorts of discussions. Again, nobody will take away threads or fibers; all that's proposed here is another tool in our toolbox. If you want to continue using fibers, you're free to do so. I would also mention that I wouldn't want fibers to be removed once stackless coroutines land in D; D is a language for everyone, and as such should give people as many tools as they need. There will always be some tool that's not used by everyone, but I see that as a win. Better to have one tool too many than to lack it and resort to weird hacks to get stuff working.