January 24
On Thursday, 23 January 2025 at 23:09:42 UTC, Richard (Rikki) Andrew Cattermole wrote:
>
> On 24/01/2025 10:17 AM, Sebastiaan Koppe wrote:
>> On Thursday, 23 January 2025 at 20:37:59 UTC, Richard (Rikki) Andrew Cattermole wrote:
>>>
>>> On 24/01/2025 9:12 AM, Sebastiaan Koppe wrote:
>>>> Upon yielding a coroutine, say a socket read, you'll want to park the coroutine until the socket read has completed. This requires a signal on completion of the async operation to the execution context to resume the coroutine.
>>>
>>> Right, I handle this as part of my scheduler and worker pool.
>>>
>>> The language has no knowledge, nor need to know any of this which is why it is not in the DIP.
>> 
>> Without having a notion on how this might work I can't reasonably comment on this DIP.
>> 
>>> How scheduling works, can only lead to confusion if it is described in a language only proposal (I've had Walter attach on to such descriptions in the past and was not helpful).
>> 
>> You don't need to describe how scheduling works, just the mechanism by which a scheduler gets notified when a coroutine is ready for resumption.
>> 
>> Rust has a Waker, C++ has the await_suspend function, etc.
>
> Are you wanting this snippet?
>
> ```d
> // If there are any dependents, unblock them and schedule their execution.
> void onComplete(GenericCoroutine);
>
> // Depender depends upon dependency, when dependency has value or completes unblock depender.
> // May need to handle dependency for scheduling.
> void seeDependency(GenericCoroutine dependency, GenericCoroutine depender);
>
> // Reschedule coroutine for execution
> void reschedule(GenericCoroutine);
>
> void execute(COState)(GenericCoroutine us, COState* coState) {
>     if (coState.tag >= 0) {
>         coState.execute();
>
>         coState.waitingOnCoroutine.match{
>             (:None) {};
>
>             (GenericCoroutine dependency) {
>                 seeDependency(dependency, us);
>             };
>
>             // Others? Futures, etc.
>         };
>     }
>
>     if (coState.tag < 0)
>         onComplete(us);
>     else
>         reschedule(us);
> }
> ```
>
> Where ``COState`` is the generated struct as per Description -> State heading.
>
> Where ``GenericCoroutine`` is the parent struct to ``Future`` as described by the DIP, that is not templated.
>
> Due to this depending on sumtypes I can't put it in as-is.
>
> Every library will do this a bit differently, but it does give the general idea of it. For example you could return the dependency and have it immediately executed rather than let the scheduler handle it.

First off: nice work on the proposal here; I really like it. Would love to try it once it's in a beta stage, as it's quite promising.
As someone who implemented their own userspace event loop via fibers, I would love to have another utility in my belt to use in the implementation.

The only thing I had a hard time figuring out is what you meant by "If it causes an error, this error is guaranteed to be wrong in a multi-threaded application of it.";
What I think you mean is that any exception created / captured by the coroutine is guaranteed to be indeed an exception and should be treated as such.
Correct me if I'm wrong.

Another thing is the visibility of the members of the created struct; shouldn't some of them be read-only (aka const for anyone outside) or completely private?
Like `tag`: there should be no situation where an outside entity should control the state of the coroutine, not even as part of a library, or am I missing something?
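
For illustration, one hypothetical way the generated struct could expose `tag` read-only; this is not something the DIP specifies, just a sketch of the suggestion above:

```d
// Hypothetical shape only; the DIP's generated struct is called
// __generatedName in the examples and currently exposes tag directly.
struct __generatedName {
    private int tag_; // writable only by the coroutine machinery

    // Outside code (schedulers, libraries) can inspect but not set it.
    int tag() const @safe pure nothrow {
        return tag_;
    }
}
```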

> Then `yield` would be a keyword, which in turn breaks code which is known to exist.

Which is the same with `await`; I honestly like the way Rust solved it: any Future (Rust's equivalent of a coroutine type) implicitly has the `.await` operation, so instead of writing `await X`, you write `X.await`. This doesn't break existing code, as `.await` is still a perfectly fine method invocation. If we're trying to reduce code breakage as much as possible, I would strongly go with the `.await` way instead of adding a new keyword.

For yield, the only thing I can think of is to introduce something like `Fiber.yield`, maybe `Coro.yield`, that gets picked up by any dlang edition that understands coroutines and gets rewritten into a proper yield, while older editions would see a reference to a function / field, which can be provided to those editions as a symbol with `static assert(false, "...")` to inform them about the improper usage; but that would have the same problem, as there could well already be such a construct... If we're using an attribute, I like the `@yield` from Quirin's post a lot more (and `__yield` seems very clumsy to me).

> Rust has a Waker, ...
> ...
> ```
> coState.waitingOnCoroutine.match{
>     (:None) {};
>     (GenericCoroutine dependency) {
>         seeDependency(dependency, us);
>     };
>     // Others? Futures, etc.
> };
> ```

The waker design seems much more flexible than a dependency system. For example, with wakers one could implement asynchronous IO by using epoll and invoking the waker when there's data available. I'm a bit confused about how that would look in your proposal. Sure, your executor uses a match on a sumtype to determine what it's waiting on, but how does one "register" a custom dependency type? Granted, the compiler can scan the code and pick up any type that's been waited on as a dependency, but how does an executor know how to handle it? Currently, the type must be known beforehand by the executor, meaning that the executor and the IO library must be developed as one, instead of being two separate things that only share a common protocol. And even with compiler support for sumtypes, when the sumtype is dynamically created, there will be times where the sumtype does not contain all the types the executor can process, ending up with unreachable branches, which could lead to compiler warnings or even errors that are cryptic.

While I agree that we should have a notion of how coroutines can be put to sleep until a certain event takes place, I don't think dependencies are a great solution to that. As mentioned, a waker API would be better suited for this task, as it lets the executor and IO be their own thing instead of forcefully combining them into one.
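
To make the comparison concrete, a Rust-style waker protocol in D might look roughly like this; all names here are hypothetical, and none of this is in the DIP:

```d
// Sketch of a waker protocol, modelled loosely on Rust's Waker.
// Everything here is illustrative, not part of the DIP.
interface Waker {
    // Invoked by the IO layer (e.g. an epoll loop) once the event the
    // coroutine suspended on has occurred; schedules it for resumption.
    void wake() @safe;
}

interface Awaitable {
    // Returns true if the result is already available; otherwise stores
    // the waker and arranges for wake() to be called later.
    bool poll(Waker waker) @safe;
}
```

With such a split, the executor only implements `Waker` and the IO library only calls `wake()`; neither needs to know the other's concrete types, which is the decoupling being argued for here.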
January 24
On 24/01/2025 5:33 PM, Mai Lapyst wrote:
> On Thursday, 23 January 2025 at 23:09:42 UTC, Richard (Rikki) Andrew Cattermole wrote:
>>
>> On 24/01/2025 10:17 AM, Sebastiaan Koppe wrote:
>>> On Thursday, 23 January 2025 at 20:37:59 UTC, Richard (Rikki) Andrew Cattermole wrote:
>>>>
>>>> On 24/01/2025 9:12 AM, Sebastiaan Koppe wrote:
>>>>> Upon yielding a coroutine, say a socket read, you'll want to park the coroutine until the socket read has completed. This requires a signal on completion of the async operation to the execution context to resume the coroutine.
>>>>
>>>> Right, I handle this as part of my scheduler and worker pool.
>>>>
>>>> The language has no knowledge, nor need to know any of this which is why it is not in the DIP.
>>>
>>> Without having a notion on how this might work I can't reasonably comment on this DIP.
>>>
>>>> How scheduling works, can only lead to confusion if it is described in a language only proposal (I've had Walter attach on to such descriptions in the past and was not helpful).
>>>
>>> You don't need to describe how scheduling works, just the mechanism by which a scheduler gets notified when a coroutine is ready for resumption.
>>>
>>> Rust has a Waker, C++ has the await_suspend function, etc.
>>
>> Are you wanting this snippet?
>>
>> ```d
>> // If there are any dependents, unblock them and schedule their execution.
>> void onComplete(GenericCoroutine);
>>
>> // Depender depends upon dependency, when dependency has value or completes unblock depender.
>> // May need to handle dependency for scheduling.
>> void seeDependency(GenericCoroutine dependency, GenericCoroutine depender);
>>
>> // Reschedule coroutine for execution
>> void reschedule(GenericCoroutine);
>>
>> void execute(COState)(GenericCoroutine us, COState* coState) {
>>     if (coState.tag >= 0) {
>>         coState.execute();
>>
>>         coState.waitingOnCoroutine.match{
>>             (:None) {};
>>
>>             (GenericCoroutine dependency) {
>>                 seeDependency(dependency, us);
>>             };
>>
>>             // Others? Futures, etc.
>>         };
>>     }
>>
>>     if (coState.tag < 0)
>>         onComplete(us);
>>     else
>>         reschedule(us);
>> }
>> ```
>>
>> Where ``COState`` is the generated struct as per Description -> State heading.
>>
>> Where ``GenericCoroutine`` is the parent struct to ``Future`` as described by the DIP, that is not templated.
>>
>> Due to this depending on sumtypes I can't put it in as-is.
>>
>> Every library will do this a bit differently, but it does give the general idea of it. For example you could return the dependency and have it immediately executed rather than let the scheduler handle it.
> 
> First off: nice work on the proposal here; I really like it. Would love to try it once it's in a beta stage, as it's quite promising.
> As someone who implemented their own userspace event loop via fibers, I would love to have another utility in my belt to use in the implementation.
> 
> The only thing I had a hard time figuring out is what you meant by "If it causes an error, this error is guaranteed to be wrong in a multi-threaded application of it.";
> What I think you mean is that any exception created / captured by the coroutine is guaranteed to be indeed an exception and should be treated as such.
> Correct me if I'm wrong.

Atila had a problem with this also. I haven't been able to change it as he didn't give me anything to work from, which you did, thank you.

"If the compiler generates an error that a normal function would not have, the error is guaranteed to not be a false positive when considering a multithreaded context of a coroutine."

> Another thing is the visibility of the members of the created struct; shouldn't some of them be read-only (aka const for anyone outside) or completly be private?

I don't see a reason to do so (we can change this later if it is shown to be a problem).

It's meant for library authors to have full control over lifetimes, and to inspect general lifecycle stuff.

End users should never see it.

If they can see it without explicit opting into it that is something we should probably close a hole on.

> Like `tag`: there should be no situation where an outside entitiy should control the state of the coroutine, not even in as a part of a library or do I miss something?

You may wish to complete a coroutine early.

Nothing bad should happen if you do this.

If it does, that is likely a compiler bug, or the user did something nasty.

>> Then `yield` would be a keyword, which in turn breaks code which is known to exist.
> 
> Which is the same with `await`; I honestly like the way Rust solved it: any Future (Rust's equivalent of a coroutine type) implicitly has the `.await` operation, so instead of writing `await X`, you write `X.await`. This doesn't break existing code, as `.await` is still a perfectly fine method invocation. If we're trying to reduce code breakage as much as possible, I would strongly go with the `.await` way instead of adding a new keyword.

I don't expect code breakage.

It's a new declaration, so I'd be calling for this to only be available in a new edition.

Worst-case scenario, we simply won't parse it in a function that isn't a coroutine.

We have multiple tools for dealing with this :)

> For yield, the only thing I can think of is to introduce something like `Fiber.yield`, maybe `Coro.yield`, that gets picked up by any dlang edition that understands coroutines and gets rewritten into a proper yield, while older editions would see a reference to a function / field, which can be provided to those editions as a symbol with `static assert(false, "...")` to inform them about the improper usage; but that would have the same problem, as there could well already be such a construct... If we're using an attribute, I like the `@yield` from Quirin's post a lot more (and `__yield` seems very clumsy to me).

If this is needed I'm sure we can figure something out.

I'm hopeful that we'll have stuff like this figured out if changes are needed prior to it being turned on. Although I am currently doubtful of it.

>> Rust has a Waker, ...
>> ...
>> ```
>> coState.waitingOnCoroutine.match{
>>     (:None) {};
>>     (GenericCoroutine dependency) {
>>         seeDependency(dependency, us);
>>     };
>>     // Others? Futures, etc.
>> };
>> ```
> 
> The waker design seems much more flexible than a dependency system. For example, with wakers one could implement asynchronous IO by using epoll and invoking the waker when there's data available. I'm a bit confused about how that would look in your proposal. Sure, your executor uses a match on a sumtype to determine what it's waiting on, but how does one "register" a custom dependency type?

Currently the DIP has no filtering on this.

It chucks the type into the sumtype (i.e. when it sees the ``await``) and it's good to go.

The library would then be responsible for going "hey I don't know what this type is ERROR".

We may need to filter things out, which we could do once we have some experience with it. Of course it could be possible that library code can handle this just fine (what I expect).

> Granted, the compiler can scan the code and pick up any type that's been waited on as a dependency, but how does an executor know how to handle it? Currently, the type must be known beforehand by the executor, meaning that the executor and the IO library must be developed as one, instead of being two separate things that only share a common protocol. And even with compiler support for sumtypes, when the sumtype is dynamically created, there will be times where the sumtype does not contain all the types the executor can process, ending up with unreachable branches, which could lead to compiler warnings or even errors that are cryptic.

Yes, my implementation is all in one. Eventloop + coroutine library.

This will likely need some further design work to see if we can split them without exposing any nasty details of the coroutine library to people who should never see it.

I don't see an issue with the sumtypes as far as usage is concerned.

```d
static if (is(Dependency : Future!ReturnType, ReturnType)) {
} else static if (is(Dependency : GenericCoroutine)) {
} else {
	static assert(0, "what type is this?");
}
```

> While I agree that we should have a notion of how coroutines can be put to sleep until a certain event takes place, I don't think dependencies are a great solution to that. As mentioned, a waker API would be better suited for this task, as it lets the executor and IO be their own thing instead of forcefully combining them into one.

They are not necessarily the same thing, although there are benefits to combining them (like sharing the same thread pool).

In my library I have something called a future completion.
This is the backbone of my eventloop library for when events take place and you want to get a notification into the hands of the user, such as reading from a socket (with the value that was read).

https://github.com/Project-Sidero/eventloop/blob/master/source/sidero/eventloop/coroutine/future_completion.d#L216

Essentially it allows you to use the coroutine abstraction to return a specific value out, and it works with the scheduler as if it were user defined. Except it will never be completed by the scheduler; that is done by some other code.
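
As a rough illustration of the idea (the real API is in the linked sidero module and may differ; these names are made up):

```d
// Hypothetical future-completion pair: the event loop keeps the trigger,
// user code awaits the future. Illustrative names only.
auto trigger = makeFutureCompletion!(ubyte[])();
Future!(ubyte[]) reply = trigger.future;

// Event-loop side, once the OS reports the read finished:
trigger.complete(bytesRead);

// Coroutine side: awaits like any other coroutine, except the scheduler
// never completes it itself; the trigger call above does.
// ubyte[] data = await reply;
```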

I am struggling to see how the waker/poll API from Rust is not a more complicated mechanism for describing a dependency for when to continue.

January 24

On Friday, 24 January 2025 at 06:16:27 UTC, Richard (Rikki) Andrew Cattermole wrote:
> "If the compiler generates an error that a normal function would not have, the error is guaranteed to not be a false positive when considering a multithreaded context of a coroutine."

By "error" do you mean an exception? There are compiler errors (as in, the compiler refuses to compile something) and exceptions (i.e. `throw X`); just making sure we're on the same page. If so, then I get what you mean, and of course that should be the case, as it is not really different from non-multithreaded, non-coroutine code: any exception thrown shouldn't be a false positive as long as the logic guarding it is not flawed in any form.

>> Like `tag`: there should be no situation where an outside entity should control the state of the coroutine, not even as part of a library, or am I missing something?
>
> You may wish to complete a coroutine early.
>
> Nothing bad should happen if you do this.
>
> If it does, that is likely a compiler bug, or the user did something nasty.

Hmmm, that's indeed a reason for changing `tag`; you wouldn't need a cancellation token, as the tag is that cancellation token to some extent. On that note, we could add a third negative value to indicate a coroutine was cancelled from an external source, or one could generally specify that any negative value means cancelled and let libraries "encode" their own error codes into it...
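
A sketch of what "encoding" a cancellation code into the tag could look like, assuming (as discussed later in the thread) that the language only reserves -1, -2 and the non-negative resume points; the exact value chosen here is purely illustrative:

```d
// Hypothetical library-chosen cancellation code; the discussion suggests
// values far below the reserved range are safe for libraries to claim.
enum int cancelledTag = -100_000;

void cancel(COState)(COState* coState) {
    // Only makes sense if the coroutine hasn't already finished
    // (negative tags mean complete or error).
    if (coState.tag >= 0)
        coState.tag = cancelledTag;
}
```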

>>> Then `yield` would be a keyword, which in turn breaks code which is known to exist.
>>
>> Which is the same with `await`; I honestly like the way Rust solved it: any Future (Rust's equivalent of a coroutine type) implicitly has the `.await` operation, so instead of writing `await X`, you write `X.await`. This doesn't break existing code, as `.await` is still a perfectly fine method invocation. If we're trying to reduce code breakage as much as possible, I would strongly go with the `.await` way instead of adding a new keyword.
>
> I don't expect code breakage.
>
> It's a new declaration, so I'd be calling for this to only be available in a new edition.

Sadly it will; take for example my own little attempt to build a somewhat async framework on top of fibers: https://github.com/Bithero-Agency/ninox.d-async/blob/f5e94af440d09df33f1d0f19557628735b04cf43/source/ninox/async/futures.d#L42-L44 It declares a function `await` for futures; if `await` becomes a general keyword, it will have the same problems as if `yield` becomes one: all places where `await` was an identifier before become invalid.

> Worst-case scenario, we simply won't parse it in a function that isn't a coroutine.

Which could be done with `yield` too, tbh. I don't see why `await` is allowed to break code and `yield` is not. We could easily make both only available in coroutines / `@async` functions.

> I am struggling to see how the waker/poll API from Rust is not a more complicated mechanism for describing a dependency for when to continue.

It's easier, as it describes how a coroutine should be woken up by the executor; a dependency system is IMO more complicated because you need to differentiate between dependencies, whereas wakers serve only one purpose: wake up a coroutine / Future that was pending so it can be re-polled / executed.


---

I've read a second time through your DIP and also took a look at your implementation, and I have some more questions:

> opConstructCo

You use this in the DIP to showcase how a coroutine would be created, but it's left unclear whether it is part of the DIP or not. Which is weird, because without it the translation

```d
ListenSocket ls = ListenSocket.create((Socket socket) {
	...
});
```

to

```d
ListenSocket ls = ListenSocket.create(
	InstantiableCoroutine!(__generatedName.ReturnType, __generatedName.Parameters)
		.opConstructCo!__generatedName
);
```

would not be possible, as the compiler would not know that `opConstructCo` should be invoked here.

Which also has another problem: how does one differentiate between asynchronous closures and non-asynchronous closures? You clearly intend the closure passed to `ListenSocket.create` to be used as a coroutine, but it lacks any indicator that it is one. Imho it should be written like this:

```d
ListenSocket ls = ListenSocket.create((Socket socket) @async {
	...
});
```
> GenericCoroutine

What's this type anyway? I understand that `COState` is the state of the coroutine, aka the `__generatedName` struct which is passed in as a generic parameter, and I think the `execute(COState)(...)` function is meant to be called through a type-erased version of it that is somehow generated for each `COState` encountered. But what is `GenericCoroutine` itself? Is it your "Task" object that holds not only the state but also the type-erased version of the execute function for the executor?

> Function calls

I also find no information in the DIP on how function calls themselves are transformed. What the transformation of a function looks like is clear, but what about calling them in a non-async function? I would argue that this should be possible, and that coroutines should have a type that reflects they're a coroutine as well as the return type, similar to Rust's `Future<T>`. This would also prove that coroutines are zero-overhead, which I would really like them to be in D.

> ```d
> struct AnotherCo {
>     int result() @safe @waitrequired {
>         return 2;
>     }
> }
>
> int myCo() @async {
>     AnotherCo co = ...;
>     // await co;
>     int v = co.result;
>     return 0;
> }
> ```

How is `AnotherCo` here a coroutine that can be `await`ed on? With my current understanding of your proposal, only functions and methods are transformed, which means that `AnotherCo.result` would be the coroutine, not its whole parent struct.

January 24
On Thursday, 23 January 2025 at 23:09:42 UTC, Richard (Rikki) Andrew Cattermole wrote:
>
> On 24/01/2025 10:17 AM, Sebastiaan Koppe wrote:
>> On Thursday, 23 January 2025 at 20:37:59 UTC, Richard (Rikki) Andrew Cattermole wrote:
>> You don't need to describe how scheduling works, just the mechanism by which a scheduler gets notified when a coroutine is ready for resumption.
>> 
>> Rust has a Waker, C++ has the await_suspend function, etc.
>
> Are you wanting this snippet?

No, not specifically. I am requesting the DIP to clarify the mechanism by which a scheduler is notified when a coroutine is ready for resumption, not the specific scheduling itself.

The snippet you posted raises more questions than it answers, to be honest. First of all, I still don't know what a GenericCoroutine or a Future is.

It seems that in your design coroutines are only able to wait for other coroutines. This means you need to model async operations as coroutines in order to suspend on them. Why was this done? C++'s approach of having an awaiter seems simpler. For one it allows the object you are awaiting on to control the continuation directly.
January 25
On 25/01/2025 9:49 AM, Mai Lapyst wrote:
> On Friday, 24 January 2025 at 06:16:27 UTC, Richard (Rikki) Andrew Cattermole wrote:
>> "If the compiler generates an error that a normal function would not have, the error is guaranteed to not be a false positive when considering a multithreaded context of a coroutine."
> 
> By "error" do you mean an exception? There are compiler errors (as in, the compiler refuses to compile something) and exceptions (i.e. `throw X`); just making sure we're on the same page. If so, then I get what you mean, and of course that should be the case, as it is not really different from non-multithreaded, non-coroutine code: any exception thrown shouldn't be a false positive as long as the logic guarding it is not flawed in any form.

I mean a compiler error. Not a runtime exception.

I listed it as a requirement just to make sure we tune any additional errors that can be generated towards being 100% correct. It's more for me than anyone else.

I.e. preventing TLS memory from crossing yield points.

>>> Like `tag`: there should be no situation where an outside entitiy should control the state of the coroutine, not even in as a part of a library or do I miss something?
>>
>> You may wish to complete a coroutine early.
>>
>> Nothing bad should happen if you do this.
>>
>> If it does, that is likely a compiler bug, or the user did something nasty.
> 
> Hmmm, that's indeed a reason for changing `tag`; you wouldn't need a cancellation token, as the tag is that cancellation token to some extent. On that note, we could add a third negative value to indicate a coroutine was cancelled from an external source, or one could generally specify that any negative value means cancelled and let libraries "encode" their own error codes into it...

I don't think that we need to.

The language only has to know about -1, -2 and >= 0.

At least currently, anything below -64k you can probably set safely.
The >= 0 ones are used for the branch table, and you really want those values for that use case as its an optimization.

Just in case we had more tags in the language, they'll be more like -10 not -100k.
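
To illustrate why the non-negative values are wanted for the branch table: the generated `execute` can switch on them directly. This is only a sketch of the idea, not the actual codegen:

```d
// Illustrative only: non-negative tags number the resume points densely
// from 0, so the switch can lower to a jump table.
void executeImpl(COState)(COState* coState) {
    switch (coState.tag) {
        case 0:
            // run up to the first yield point, then set coState.tag = 1
            break;
        case 1:
            // resume after the first yield; on completion set tag = -1
            break;
        default:
            assert(coState.tag < 0); // negative means complete or error
            break;
    }
}
```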

>>>> Then `yield` would be a keyword, which in turn breaks code which is known to exist.
>>>
>>> Which is the same with `await`; I honestly like the way Rust solved it: any Future (Rust's equivalent of a coroutine type) implicitly has the `.await` operation, so instead of writing `await X`, you write `X.await`. This doesn't break existing code, as `.await` is still a perfectly fine method invocation. If we're trying to reduce code breakage as much as possible, I would strongly go with the `.await` way instead of adding a new keyword.
>>
>> I don't expect code breakage.
>>
>> It's a new declaration, so I'd be calling for this to only be available in a new edition.
> 
> Sadly it will; take for example my own little attempt to build a somewhat async framework on top of fibers: https://github.com/Bithero-Agency/ninox.d-async/blob/f5e94af440d09df33f1d0f19557628735b04cf43/source/ninox/async/futures.d#L42-L44 It declares a function `await` for futures; if `await` becomes a general keyword, it will have the same problems as if `yield` becomes one: all places where `await` was an identifier before become invalid.
> 
>> Worst-case scenario, we simply won't parse it in a function that isn't a coroutine.
> 
> Which could be done with `yield` too, tbh. I don't see why `await` is allowed to break code and `yield` is not. We could easily make both only available in coroutines / `@async` functions.

The ``await`` keyword has been used in multithreading contexts, with this exact meaning, for longer than I've been alive.

It's also very uncommon as an identifier and does not see usage in druntime/phobos.

As it has no meaning outside of a coroutine, it'll be easy to handle I think.

>> I am struggling to see how the waker/poll API from Rust is not a more complicated mechanism for describing a dependency for when to continue.
> 
> It's easier, as it describes how a coroutine should be woken up by the executor; a dependency system is IMO more complicated because you need to differentiate between dependencies, whereas wakers serve only one purpose: wake up a coroutine / Future that was pending so it can be re-polled / executed.

If you want to do this you can.

I did spend some time last night thinking about this.

```d
sumtype PollResult(T) = :NotReady | T;

PollResult!(int[]) co(Socket socket) @async {
	if (!socket.ready) {
		return :NotReady;
	}

	@async return socket.read(1024);
}
```

The rest is all on the library side, register in the waker, against the socket. Or have the socket reschedule as you please.

Note: the socket would typically be the one to instantiate the coroutine, so it can do the registration with all the appropriate object references.
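
A library-side sketch of that registration, reusing `reschedule` from the earlier snippet; `onReadable`, `start`, and the overall shape are hypothetical:

```d
// Hypothetical: the socket instantiates the coroutine, so it holds the
// references needed to re-arm it when data arrives.
Future!(PollResult!(int[])) start(
        Socket socket,
        InstantiableCoroutine!(PollResult!(int[]), Socket) ico) {
    auto co = ico.makeInstance(socket);
    // When epoll reports readability, hand the coroutine back to the
    // scheduler; reschedule is the library function from the snippet above.
    socket.onReadable(() => reschedule(co));
    return co;
}
```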

Stuff like this is why I added the multiple returns support, even though I do not believe it is needed.

It's also a good example of why the language does not define the library, so you have the freedom to do this stuff!

> ---
> 
> I've read a second time through your DIP and also took a look at your implementation and have some more questions:
> 
>> opConstructCo
> 
> You use this in the DIP to showcase how a coroutine would be created, but it's left unclear whether it is part of the DIP or not. Which is weird, because without it the translation

It is not part of the DIP. Without the operator overload example, it wouldn't be understood.

> ```d
> ListenSocket ls = ListenSocket.create((Socket socket) {
>      ...
> });
> ```
> to
> ```d
>> ListenSocket ls = ListenSocket.create(
>>      InstantiableCoroutine!(__generatedName.ReturnType, __generatedName.Parameters)
>>          .opConstructCo!__generatedName
>> );
> ```
> would not be possible as the compiler would not know that opConstructCo should be invoked here.

Let's break it down a bit.

The compiler, using just the parse tree, can see the function ``opConstructCo`` on the library type ``InstantiableCoroutine``, allowing it to flag the type as an instantiable coroutine.

It can see that the parameter in ``ListenSocket.create`` is of type ``InstantiableCoroutine`` via a little special casing (if it hasn't been template instantiated explicitly).

The argument-to-parameter matching only needs to verify that the parameter has the flag that it is an instantiable coroutine, and that the argument is some kind of function; it does not need to instantiate any template.

Once matched, then it'll do the conversion and instantiations as required.

I've played with this area of dmd; it should work. If the parameter is templated we may have trouble, but I am not expecting that for things like sockets, especially with partial arguments support.

https://github.com/Project-Sidero/eventloop/blob/master/source/sidero/eventloop/coroutine/instanceable.d#L72

> Which also has another problem: how does one differentiate between asynchronous closures and non-asynchronous closures? You clearly intend the closure passed to `ListenSocket.create` to be used as a coroutine, but it lacks any indicator that it is one. Imho it should be written like this:
> ```d
> ListenSocket ls = ListenSocket.create((Socket socket) @async {
>      ...
> });
> ```

See above, it can see that it is a coroutine by the parameter, rather than on the argument.

Even with the explicit ``@async`` it is likely that the error message would have to do something similar to detect that case. Otherwise people are going to get confused.

You don't win a whole lot by requiring it, especially when they are templates and they look like they should "just work".

>> GenericCoroutine
> 
> What's this type anyway? I understand that `COState` is the state of the coroutine, aka the `__generatedName` struct which is passed in as a generic parameter, and I think the `execute(COState)(...)` function is meant to be called through a type-erased version of it that is somehow generated for each `COState` encountered. But what is `GenericCoroutine` itself? Is it your "Task" object that holds not only the state but also the type-erased version of the execute function for the executor?

I didn't define ``GenericCoroutine`` in the DIP, as it wasn't needed.

Indeed, this is my task abstraction with the type erased executor for execution.

Think of the hierarchy like this; it is what I have implemented (more or less), and you could do it differently if it doesn't suit you:

```d
struct GenericCoroutine {
	bool isComplete();
	CoroutineCondition condition();
	void unsafeResume();
	void blockUntilCompleteOrHaveValue();
}

struct Future(ReturnType) : GenericCoroutine {
	ReturnType result();
}

struct InstantiableCoroutine(ReturnType, Parameters...) {
	Future!ReturnType makeInstance(Parameters);
	InstantiableCoroutine!(ReturnType, ughhhhh) partial(Args...)(Args); // removes N from start of Parameters

	static InstantiableCoroutine opConstructCo(CoroutineDescriptor : __descriptorco)();
}
```

https://github.com/Project-Sidero/eventloop/tree/master/source/sidero/eventloop/coroutine

Consider why ``GenericCoroutine`` exists: internals, the scheduler etc. cannot deal with a typed coroutine object; they must have an untyped one.

Here is how I do it: https://github.com/Project-Sidero/eventloop/blob/master/source/sidero/eventloop/coroutine/builder.d#L47
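
For completeness, a hypothetical use of `partial` from the hierarchy above, binding the first parameter now and supplying the rest at instantiation:

```d
// Illustrative only, based on the signatures sketched above.
InstantiableCoroutine!(int, Socket, ubyte[]) ico; // obtained via opConstructCo

// partial() strips Socket off the front of Parameters...
auto bound = ico.partial(socket);

// ...so makeInstance only needs the remaining arguments.
Future!int f = bound.makeInstance(data);
```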

>> Function calls
> 
> I also find no information in the DIP on how function calls themselves are transformed. What the transformation of a function looks like is clear, but what about calling them in a non-async function?

Currently they cannot be.

It was heavily discussed, and I did support it originally.

It was decided that the amount of code that will actually use this is minimal, and there is enough potential for problems/confusion, that it wasn't worth it for the time being.

See the ``Prime Sieve`` example for one way you can do this.

I can confirm that it does work in practice :)

https://github.com/Project-Sidero/eventloop/blob/master/examples/networking/source/app.d#L398

> I would argue that this should be possible and have a type that reflects that they're a coroutine as well as the return type, similar to Rust's `Future<T>`. This would also prove that coroutines are zero-overhead, which I would really like them to be in D.
> 
>> ```d
>> struct AnotherCo {
>>     int result() @safe @waitrequired {
>>         return 2;
>>     }
>> }
>>
>> int myCo() @async {
>>     AnotherCo co = ...;
>>     // await co;
>>     int v = co.result;
>>     return 0;
>> }
>> ```
> 
> How is `AnotherCo` here a coroutine that can be `await`ed on? With my current understanding of your proposal, only functions and methods are transformed, which means that `AnotherCo.result` would be the coroutine, not its whole parent struct.

Nothing in ``AnotherCo`` would be transformed.

The ``await`` statement does two things.

1. It assigns the expression's value into the state variable for waiting on.
2. It yields.

It doesn't know, nor care what the type of the expression resolves to.
The expression has no reason to be transformed in any way.
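
The two steps above can be sketched as a lowering (a hypothetical illustration only; names like ``__CoState``, ``tag``, and ``waitingOn`` stand in for whatever the compiler actually generates):

```d
// Hypothetical sketch of what `await expr;` inside an @async function lowers
// to. None of these identifiers are mandated by the DIP.
struct __CoState {
    int tag;             // which block to resume at; < 0 once complete
    WaitingOn waitingOn; // sumtype: (:None) | GenericCoroutine | ...
    // ... captured locals live here instead of on the stack ...
}

void __execute(__CoState* state) {
    switch (state.tag) {
    case 0:
        // ... code before the await ...
        state.waitingOn = WaitingOn(expr); // 1. store the awaited value
        state.tag = 1;                     // remember where to resume
        return;                            // 2. yield back to the library
    case 1:
        // the library resumed us once it decided the dependency was ready
        // ... code after the await ...
        state.tag = -1;                    // mark complete
        return;
    default:
        assert(0);
    }
}
```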

Also, structs/classes are inherently defined as supporting methods that are ``@async``; what happens is that the ``this`` pointer for that type goes after the state struct pointer post-transformation, and you have to explicitly pass it in (via partial, perhaps?).

January 25
On 25/01/2025 9:56 AM, Sebastiaan Koppe wrote:
> On Thursday, 23 January 2025 at 23:09:42 UTC, Richard (Rikki) Andrew Cattermole wrote:
>>
>> On 24/01/2025 10:17 AM, Sebastiaan Koppe wrote:
>>> On Thursday, 23 January 2025 at 20:37:59 UTC, Richard (Rikki) Andrew Cattermole wrote:
>>> You don't need to describe how scheduling works, just the mechanism by which a scheduler gets notified when a coroutine is ready for resumption.
>>>
>>> Rust has a Waker, C++ has the await_suspend function, etc.
>>
>> Are you wanting this snippet?
> 
> No, not specifically. I am requesting the DIP to clarify the mechanism by which a scheduler is notified when a coroutine is ready for resumption, not the specific scheduling itself.

It should be scheduled:

If: tag >= 0

And: waitingOnCoroutine == None || (waitingOnCoroutine != None && waitingOnCoroutine.isCompleteOrHaveValue)

Where isCompleteOrHaveValue is: tag < 0 || haveValue
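
As library code, those rules could be sketched as a predicate (assuming ``tag``, ``waitingOnCoroutine``, and ``haveValue`` are exposed by the coroutine object; the accessor names here are made up):

```d
// Sketch only: when a scheduler should consider a coroutine runnable.
bool isCompleteOrHaveValue(GenericCoroutine c) {
    return c.tag < 0 || c.haveValue;
}

bool shouldSchedule(GenericCoroutine c) {
    if (c.tag < 0)
        return false; // already complete, nothing left to run
    // runnable when waiting on nothing, or the dependency is ready
    return c.waitingOnCoroutine.isNone
        || isCompleteOrHaveValue(c.waitingOnCoroutine.get);
}
```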

The DIP does not require you to do any of this (if things are written correctly it should not segfault and hopefully won't corrupt anything), but it would be good practice. And yes, it is library code. The compiler does not help you to do any of this. You, the library author, are responsible for it.

If you want to do something different like a waker style where these rules do not apply, you are free to. The language only requires the tag to be >= 0 due to the branch table stuff.

> The snippet you posted raises more questions than it answers, to be honest. First of all, I still don't know what a GenericCoroutine or a Future is.

I wrote it out for someone else here: https://forum.dlang.org/post/vn14i8$1g46$1@digitalmars.com

```d
struct GenericCoroutine {
	bool isComplete();
	CoroutineCondition condition();
	void unsafeResume();
	void blockUntilCompleteOrHaveValue();
}

struct Future(ReturnType) : GenericCoroutine {
	ReturnType result();
}

struct InstantiableCoroutine(ReturnType, Parameters...) {
	Future!ReturnType makeInstance(Parameters);
	InstantiableCoroutine!(ReturnType, ughhhhh) partial(Args...)(Args); // removes N from start of Parameters

	static InstantiableCoroutine opConstructCo(CoroutineDescriptor : __descriptorco)();
}
```

https://github.com/Project-Sidero/eventloop/tree/master/source/sidero/eventloop/coroutine

If it doesn't work for you, do it a different way. The language has no inbuilt knowledge of any of these types. It determines everything that it needs from the operator overload and the core.attributes attributes.

If it turns out those attributes are not enough (I am not expecting any to be needed), we can add some to allow your library to communicate to the compiler on how it needs to do the slicing and dicing of the function into the state object that you can consume and call.

> It seems that in your design coroutines are only able to wait for other coroutines. This means you need to model async operations as coroutines in order to suspend on them.

It should be coroutines, but I left out the filtering for the type that the ``await`` statement will accept. It'll chuck whatever you want into the sumtype value. It's your job to filter it. If you want to support other types and behaviors, go for it!

Remember, the ``await`` statement does two things: assign to ``waitingOn``, then yield (aka return) (and set the tag appropriately).

> Why was this done? C++'s approach of having
> an awaiter seems simpler.

This is the C# approach.

https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/language-specification/expressions#1298-await-expressions

It is a significantly more mature solution, and we have Adam, who has experience working with it in teams since it was created. He has dealt with all the problems that come with that. I don't have a stakeholder who fits the bill for the other styles.

In saying all that, I find the dependency approach to be very intuitive, and I was able to implement it purely from first principles. Whereas the other approaches, including C++'s, are still, after much reading, not in my mental model.

> For one it allows the object you are awaiting
> on to control the continuation directly.

If you want your library to look at the ``waitingOn`` variable for control over scheduling, go for it! Nothing in the DIP currently should stop you from doing that.

You could even add support for it as part of instantiation of the coroutine! It's your library code, you can do whatever you want on this front.

You control execution of the coroutine itself, so you can see that this value was set. You can inspect it, and you can call whatever you like.

That is what the last example with ``void execute(COState)(GenericCoroutine us, COState* coState) {`` shows. You are fully in control of the coroutine's execution; the language is focused solely on the slicing and dicing of the function into something that a library can then call.

The language defines none of this _on purpose_.

January 25
On Friday, 24 January 2025 at 23:22:19 UTC, Richard (Rikki) Andrew Cattermole wrote:
> On 25/01/2025 9:56 AM, Sebastiaan Koppe wrote:
>> On Thursday, 23 January 2025 at 23:09:42 UTC, Richard (Rikki) Andrew Cattermole wrote:
>>>
>>> On 24/01/2025 10:17 AM, Sebastiaan Koppe wrote:
>>>> On Thursday, 23 January 2025 at 20:37:59 UTC, Richard (Rikki) Andrew Cattermole wrote:
>>>> You don't need to describe how scheduling works, just the mechanism by which a scheduler gets notified when a coroutine is ready for resumption.
>>>>
>>>> Rust has a Waker, C++ has the await_suspend function, etc.
>>>
>>> Are you wanting this snippet?
>> 
>> No, not specifically. I am requesting the DIP to clarify the mechanism by which a scheduler is notified when a coroutine is ready for resumption, not the specific scheduling itself.
>
> If it doesn't work for you, do it a different way. The language has no inbuilt knowledge of any of these types. It determines everything that it needs from the operator overload and the core.attributes attributes.

Well, then the DIP needs to be more explicit that the compiler is merely doing the code transformation, that the created coroutine frame needs to be driven completely by library code, and that the types that are awaited on are opaque to the compiler and simply passed along to library code.

>> It seems that in your design coroutines are only able to wait for other coroutines. This means you need to model async operations as coroutines in order to suspend on them.
>
> It should be coroutines, but I left out the filtering for the type that the ``await`` statement will accept. It'll chuck whatever you want into the sumtype value. Its your job to filter it. If you want to support other types and behaviors go for it!

The name `GenericCoroutine` suggests there is type erasure, but if the library driving the coroutine can work with the direct types that are awaited on, that would work.

As an optimisation possibility it would be good if the coroutine frame could have some storage space for async operations, which would allow us to eliminate some heap allocations. The easiest way to support that is by having the compiler call a predefined function on the object in the await expression (say `getAwaiter`), whose returned object would be stored in the coroutine frame. This offers quite a bit of flexibility for library authors without putting any burden on the user.
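
A sketch of how that might look (every name here, ``getAwaiter`` included, is hypothetical and not part of the DIP):

```d
// Hypothetical: the awaited object hands out a small value-type awaiter that
// the compiler could embed directly in the coroutine frame, avoiding a heap
// allocation per await.
struct SocketRead {
    ReadAwaiter getAwaiter() { return ReadAwaiter(&this); }
}

struct ReadAwaiter {
    SocketRead* op;
    bool isReady();                        // has the async operation finished?
    void onSuspend(GenericCoroutine self); // register the continuation
}

// The generated frame would then reserve in-frame storage for it:
struct __CoFrame {
    int tag;
    ReadAwaiter __awaiter0; // stored by value instead of heap-allocated
    // ... locals ...
}
```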

In the Fiber support in my Sender/Receiver library there is only one single allocation per yield point. Would be good if we can get at least as few allocations.

> This is the C# approach.
>
> https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/language-specification/expressions#1298-await-expressions

From that link:

"The operand of an await_expression is called the task. It represents an asynchronous operation that may or may not be complete at the time the await_expression is evaluated. The purpose of the await operator is to suspend execution of the enclosing async function until the awaited task is complete, and then obtain its outcome."

Note that `task` is a way better name than `Future`.

And:

"The task of an await_expression is required to be awaitable.
An expression t is awaitable if one of the following holds:
[...]
- t has an accessible instance or extension method called GetAwaiter
[...]
The purpose of the GetAwaiter method is to obtain an awaiter for the task.
[...]
The purpose of the INotifyCompletion.OnCompleted method is to sign up a “continuation” to the task; i.e., a delegate (of type System.Action) that will be invoked once the task is complete."

You see? It defines the mechanism by which to resume an awaitable.

> In saying all that, I find the dependency approach to be very intuitive, and I was able to implement it purely off of first principles. Whereas the other approaches including C++ is still after much reading not in my mental model.

It is hard for me to see if there are any shortcomings at this point.
Is there a dmd implementation I could try to integrate with?

> > For one it allows the object you are awaiting
> > on to control the continuation directly.
>
> If you want that for your library to look at the ``waitingOn`` variable for control over scheduling, go for it! Nothing in the DIP currently should stop you from doing that.
>
> [...]
>
> The language defines none of this _on purpose_.

As mentioned above, this needs to be made more clear in the DIP.

One possible challenge with this flexibility is whether it isn't too flexible. It is not uncommon to have multiple event loops in a program, potentially coming from distinct libraries. Without a common mechanism to resume awaitables from each, it might result in incompatibility galore.
January 26
On 25/01/2025 11:38 PM, Sebastiaan Koppe wrote:
> On Friday, 24 January 2025 at 23:22:19 UTC, Richard (Rikki) Andrew Cattermole wrote:
>> On 25/01/2025 9:56 AM, Sebastiaan Koppe wrote:
>>> On Thursday, 23 January 2025 at 23:09:42 UTC, Richard (Rikki) Andrew Cattermole wrote:
>>>>
>>>> On 24/01/2025 10:17 AM, Sebastiaan Koppe wrote:
>>>>> On Thursday, 23 January 2025 at 20:37:59 UTC, Richard (Rikki) Andrew Cattermole wrote:
>>>>> You don't need to describe how scheduling works, just the mechanism by which a scheduler gets notified when a coroutine is ready for resumption.
>>>>>
>>>>> Rust has a Waker, C++ has the await_suspend function, etc.
>>>>
>>>> Are you wanting this snippet?
>>>
>>> No, not specifically. I am requesting the DIP to clarify the mechanism by which a scheduler is notified when a coroutine is ready for resumption, not the specific scheduling itself.
>>
>> If it doesn't work for you, do it a different way. The language has no inbuilt knowledge of any of these types. It determines everything that it needs from the operator overload and the core.attributes attributes.
> 
> Well, then the DIP needs to be more explicit that the compiler is merely doing the code transformation, that the created coroutine frame needs to be driven completely by library code, and that the types that are awaited on are opaque to the compiler and simply passed along to library code.

It is in there.

"The language feature must not require a specific library to be used with it."

But if you want it to be stated again someplace, will do.

I am very happy that we have got this resolved.

>>> It seems that in your design coroutines are only able to wait for other coroutines. This means you need to model async operations as coroutines in order to suspend on them.
>>
>> It should be coroutines, but I left out the filtering for the type that the ``await`` statement will accept. It'll chuck whatever you want into the sumtype value. Its your job to filter it. If you want to support other types and behaviors go for it!
> 
> The name `GenericCoroutine` suggests there is type erasure, but if the library driving the coroutine can work with the direct types that are awaited on, that would work.
> 
> As an optimisation possibility it would be good if the coroutine frame could have some storage space for async operations, which would allow us to eliminate some heap allocations. The easiest way to support that is by having the compiler call a predefined function on the object in the await expression (say `getAwaiter`), whose returned object would be stored in the coroutine frame. This offers quite a bit of flexibility for library authors without putting any burden on the user.

My main concern is it'll result in stack memory escaping.

We may want to limit that with an attribute, but that is an open problem that isn't going to limit us for the time being.

> In the Fiber support in my Sender/Receiver library there is only one single allocation per yield point. Would be good if we can get at least as few allocations.

The best way to handle that is one allocation (at CT) for the descriptor that you can instantiate coroutines from.

Then one big allocation for all the different structs involved.

Could use a free list to optimize that a bit.

Some interesting possibilities here for someone that cares.
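
That free-list idea could be sketched as (hypothetical library-internal code, not something the DIP or the existing library defines):

```d
// Sketch: recycle the "one big allocation" for a coroutine's structs via an
// intrusive free list, instead of allocating fresh on every instantiation.
import core.stdc.stdlib : malloc;

struct StateBlock {
    StateBlock* nextFree; // link used only while the block sits in the free list
    // ... the coroutine state structs live here while in use ...
}

StateBlock* freeList;

StateBlock* acquire() {
    if (freeList !is null) {
        auto block = freeList;
        freeList = block.nextFree; // pop a previously released block
        return block;
    }
    return cast(StateBlock*) malloc(StateBlock.sizeof);
}

void release(StateBlock* block) {
    block.nextFree = freeList; // push onto the free list for reuse
    freeList = block;
}
```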

>> This is the C# approach.
>>
>> https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/ language-specification/expressions#1298-await-expressions
> 
>  From that link:
> 
> "The operand of an await_expression is called the task. It represents an asynchronous operation that may or may not be complete at the time the await_expression is evaluated. The purpose of the await operator is to suspend execution of the enclosing async function until the awaited task is complete, and then obtain its outcome."
> 
> Note that `task` is a way better name than `Future`.

It can be, I intentionally tried to conflate a promise and a coroutine into a single object.

There are a bunch of fairly standard names for this stuff; whatever I picked, people would have opinions on, and since it's my stuff, I can disregard them. PhobosV3 would need to be argued about.

> And:
> 
> "The task of an await_expression is required to be awaitable.
> An expression t is awaitable if one of the following holds:
> [...]
> - t has an accessible instance or extension method called GetAwaiter
> [...]
> The purpose of the GetAwaiter method is to obtain an awaiter for the task.
> [...]
> The purpose of the INotifyCompletion.OnCompleted method is to sign up a “continuation” to the task; i.e., a delegate (of type System.Action) that will be invoked once the task is complete."
> 
> You see? It defines the mechanism by which to resume an awaitable.
> 
>> In saying all that, I find the dependency approach to be very intuitive, and I was able to implement it purely off of first principles. Whereas the other approaches including C++ is still after much reading not in my mental model.
> 
> It is hard for me to see if there are any shortcomings at this point.
> Is there an dmd implemention I could try to integrate with?

There is no language implementation currently, only my library (which hasn't made it to branch tables just yet; I'll wait for language support beforehand).

Sadly it is not a priority to implement this year even if it is accepted; stuff like escape analysis is up this year.

>> > For one it allows the object you are awaiting
>> > on to control the continuation directly.
>>
>> If you want that for your library to look at the ``waitingOn`` variable for control over scheduling, go for it! Nothing in the DIP currently should stop you from doing that.
>>
>> [...]
>>
>> The language defines none of this _on purpose_.
> 
> As mentioned above, this needs to be made more clear in the DIP.
> 
> One possible challenge with this flexibility is whether it isn't too flexible. It is not uncommon to have multiple eventloops in a program, potentionally coming from distinct libraries. Without a common mechanism to resume awaitables from each it might result in incompatibility galore.

I fear the opposite: that any attempted typed vtable to merge implementations is going to have minimal use, and for all intents and purposes each will be too specialized for its use case to make it worth adding.

Consider vibe.d: a lot of projects are derived from it, and they use its abstractions.

PhobosV3 is meant to take on the role of a correct event-based library, so the hope is that it'll be a root. Most likely there would also be one focused on speed.

Do you really want the correct-but-slower design to be used as part of the fast-with-assumptions implementation?

January 25

On Saturday, 25 January 2025 at 13:41:24 UTC, Richard (Rikki) Andrew Cattermole wrote:

> The ``await`` keyword has been used for multithreading longer than I've been alive. To mean what it does.
> Its also very uncommon and does not see usage in druntime/phobos.

So "preventing breakage" is reserved only for Phobos then, and any user-written code is fine to break at any moment? I find that a very problematic approach to implementing / enhancing a language. "Don't break userspace" comes to mind; we should first and foremost be concerned with users interacting with the feature (which you seem to be concerned with as well), and as such I wouldn't want to break all existing asynchronous libraries out there when the new edition rolls around. This makes dlang seem even more broken and "too niche" for people to use, as any async library used in examples, tutorials, etc. up to this point will horribly break.

> As it has no meaning outside of a coroutine, it'll be easy to handle I think.

Then the DIP should specify it. Either the token `await` becomes a hard keyword, disallowing any identifier usage of it, or it becomes a soft one, where it only acts as a keyword in `@async` contexts and as a normal identifier outside of them. You even link C#'s definition, which has (somewhat) the exact wording needed for it:

```
Inside an async function, await shall not be used as an available_identifier although the verbatim identifier @await may be used. There is therefore no syntactic ambiguity between await_expressions and various expressions involving identifiers. Outside of async functions, await acts as a normal identifier.
```
> Stuff like this is why I added the multiple returns support, even though I do not believe it is needed.

Which multiple return support? The DIP states clearly that it is NOT supported.

> Its also a good example of why the language does not define the library, so you have the freedom to do this stuff!

Yes, but honestly you do the same: your dependency system defines how libraries need to interact with coroutines, the same way a waker does. I don't want to argue that wakers don't define a library usage as well, but dependencies do so too.

> It is not part of the DIP. Without the operator overload example, it wouldn't be understood.

Then do not put it into the DIP. It should only contain your design and what's possible with it, without having to rely on possible future DIPs to add operators to make your DIP actually work.

> The compiler using just the parse tree can see the function ``opConstructCo`` on the library type ``InstantiableCoroutine``. Allowing it to flag the type as a instantiable coroutine.

Again: this description says that the compiler treats `opConstructCo` differently from other functions. What would happen if I want to use another name? What will happen if I have multiple functions with the same signature but different names?

> See above, it can see that it is a coroutine by the parameter, rather than on the argument.

So the argument (lambda) would not be a coroutine and could not use `await` or `@async` return? This seems counter-intuitive, as I can clearly see that code like this will exist:

```d
ListenSocket ls = ListenSocket.create((Socket socket) @async {
	auto line = await socket.readLine();
	// ...
});
```

Therefore the function should be annotated to be `@async`; especially because you say time and time again it should be usable by users without prior knowledge of the insides of the system. Making it so that functions can only have `await` if they're `@async` but lambdas are whatever they want to be seems like a huge boobytrap.

> You don't win a whole lot by requiring it. Especially when they are templates and they look like they should "just work".

It makes things clearer for the writer (and future readers), and by extension the compiler, as it now certainly knows to slice the lambda as well, as this is the intention of the developer.

> It was heavily discussed

Where exactly? Haven't seen it yet, sorry. And even then: these should be part of the DIP under a section like "non-goals" or "discarded ideas", so people know that a) they were considered and b) what considerations led to the decision.

> See the Prime Sieve example for one way you can do this.

I've seen it, but again: it uses undeclared things that aren't as clear as day if you're not the writer of the DIP.

```d
InstantiableCoroutine!(int) ico = &generate;
Future!int ch = ico.makeInstance();
```

Why does this work? `generate` is a coroutine, but why can it be "just" assigned to a library shell? Does it "just work"? That's not how programming works or how standards should be written. I could see that you meant that a constructor taking a template parameter with the `__descriptorco` should be used, but again: it is not stated in the DIP and as such should not be taken for granted just because you expect people to come to the conclusion themselves. Look at C++ papers, they are huge for a reason: EVERYTHING gets written down so no confusion can happen.

> The ``await`` statement does two things.
>
>  1. It assigns the expression's value into the state variable for waiting on.
>  2. It yields.

Then please, for the love of god, put it into the DIP! I'm sorry that I'm so picky about this, but a specification (which your DIP is) should contain every detail of your idea, not only the bits gemini deemed as important. We're humans, and as such we should be especially careful to give each other as much information as possible.

> Whereas the other approaches including C++ is still after much reading not in my mental model.

I somewhat start to get a grasp of yours: while in your model you try to just "throw" the awaited-on value back to anyone interested in it, using a sumtype to do it, other languages define a stricter interface that needs to be followed: C++ with awaiters and Rust with its `Future<>`s and `Waker`s. Both ways prevent splits in the ecosystem, or one library getting on top while everything else just dies.

That's what I honestly fear with the current approach: there will be one way to use dependencies and that's it. The problems it has will extend to all async code, and an outside viewer will declare async in dlang broken without anyone realising it's just the library that's broken. Take dlang's std.regex for example: it's very slow in comparison with others and you could easily roll your own, but nobody does, so everybody just assumes it's a "dlang" problem and moves on. While this has only minimal impact because it's just regex, with an entire language feature that will be presented through the lens of the most used or most "present" library (not popular! big difference), this will make people say "hey, dlang's async is so bad because of this and that". I want to prevent such a thing.

With a more strict protocol on how things are awaited (C++) or a coroutine can be "retried" / woken up (Rust), these problems go away. Any executor can rely on the fact that any IO / waiting structure will follow the protocol, and as such they're interchangeable, which is a big benefit for user and application code as no one needs to re-invent the whole wheel.

Another benefit is also that it (somewhat) helps in ensuring that the coroutine is actually in a good state without the executor needing to know about that state itself.

To help understand the two models a bit more, let's take a look at a "typical" flow of a coroutine:

 1. starts the coroutine
 2. initiates a `read_all()` of a file
 3. awaits the `read_all()` and pauses the coroutine
 4. gets re-called since the waited-on part is now resolved
 5. processes the data

In your proposal this works by setting a dependency on the `read_all()`'s return type. If the executor now simply ignores the dependency, it recalls the coroutine and the coroutine is in a bad state, as it does not validate whether the dependency is actually resolved (how would it?). As a result, you would need to put it inside a loop:

```d
ReadDependency r = ...;
while (!r.isReady) {
  await r;
}
```

Which is boilerplate best avoided.

Secondly, the `read_all` itself: it and the executor would need to agree on an out-of-language protocol on how to actually handle the dependency. This will most likely mean that a library exposes an interface like `Awaitable` that any dependency would need to implement, but with the downside that any dependent now has an explicit dependency on said library. Sure, maybe over time a standard set of interfaces would arise that the community would adopt, but then we have just re-invented Java's API-dependency hell.

In C++ the `co_await` dictates that the coroutine is blocked as long as the Awaiter protocol says it is, since any user expects that the awaited thing is actually resolved after it's awaited. It doesn't matter if successfully or not; the key point is that it's not pending anymore.

In Rust it's even simpler: polling is a concept that even kids understand: when you want your parents to give you something, you "poll" until they give it to you or tell you no in a way that keeps you from continuing what you originally wanted to do. Same thing in Rust: a coroutine is "polled" by the executor and can either resolve with the data you expected, or tell you that it's still waiting and to come back later. The compiler ensures that only a ready state is allowed to continue the coroutine. If you want to be more performant and not spin-lock in the executor in the hope that someday the future will resolve, you can give it a waker and say: "hey, if you say you are still not done, I will do other things; if you think you're ready for me to try again, just call this and I will come back to you!".
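
The Rust model described above can be shown with a minimal hand-rolled example (a sketch using only the standard library; `CountDown` and the spin-polling `run` executor are invented for illustration, and a real executor would park the task and rely on the `Waker` instead of looping):

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A toy future: reports Pending three times, then resolves. A real future
// would represent e.g. a socket read, stash cx.waker(), and have the IO
// layer call wake() once data arrives.
struct CountDown(u32);

impl Future for CountDown {
    type Output = &'static str;

    fn poll(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Self::Output> {
        if self.0 == 0 {
            Poll::Ready("done")
        } else {
            self.0 -= 1;
            Poll::Pending // "still waiting, come back later"
        }
    }
}

// Minimal no-op Waker, just enough to build a Context for polling.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

// Toy executor: spin-polls until ready (a real one sleeps until woken).
fn run() -> &'static str {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fut = CountDown(3);
    let mut fut = Pin::new(&mut fut);
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

fn main() {
    println!("{}", run()); // prints "done"
}
```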

January 26
On 26/01/2025 5:44 AM, Mai Lapyst wrote:
> On Saturday, 25 January 2025 at 13:41:24 UTC, Richard (Rikki) Andrew Cattermole wrote:
>> The ``await`` keyword has been used for multithreading longer than I've been alive. To mean what it does.
>> Its also very uncommon and does not see usage in druntime/phobos.
> 
> So "preventing breakage" is reserved only for Phobos then, and any user-written code is fine to break at any moment? I find that a very problematic approach to implementing / enhancing a language. "Don't break userspace" comes to mind; we should first and foremost be concerned with users interacting with the feature (which you seem to be concerned with as well), and as such I wouldn't want to break all existing asynchronous libraries out there when the new edition rolls around. This makes dlang seem even more broken and "too niche" for people to use, as any async library used in examples, tutorials, etc. up to this point will horribly break.

The ``await`` statement only works in a coroutine, it should not break anything.

Its entirely new code that it applies to.

Old code that uses that identifier won't be compatible with the new eventloop anyway, and probably won't be desirable to call (i.e. blocking where you don't want it to block).

We have strict rules these days on breaking code, which is to not do it.
The breaking changes and deprecations section reflects this.

I have no intention on breaking anything in this proposal as it isn't needed.

>> As it has no meaning outside of a coroutine, it'll be easy to handle I think.
> 
> Then the DIP should specify it. Either the token `await` becomes a hard keyword, disallowing any identifier usage of it, or it becomes a soft one, where it only acts as a keyword in `@async` contexts and as a normal identifier outside of them. You even link C#'s definition, which has (somewhat) the exact wording needed for it:
> ```
> Inside an async function, await shall not be used as an available_identifier although the verbatim identifier @await may be used. There is therefore no syntactic ambiguity between await_expressions and various expressions involving identifiers. Outside of async functions, await acts as a normal identifier.
> ```

It depends.

If we get editions, then it can be a keyword in a new edition, but not in an old one.

If we don't get it, it can be a soft keyword where it only applies in context of a coroutine.

Whatever is picked, it will be tuned towards "non-breaking".

>> Stuff like this is why I added the multiple returns support, even though I do not believe it is needed.
> 
> Which multiple return support? The DIP states clearly that it is **NOT** supported.

For C#, not the proposed feature.

Adding "This is a return that does not complete the coroutine, to enable multiple value returns." to make it very explicit that this is what it is offering.

>> Its also a good example of why the language does not define the library, so you have the freedom to do this stuff!
> 
> Yes, but honestly you do the same: your dependency system defines how libraries need to interact with coroutines, the same way a waker does. I don't want to argue that wakers don't define a library usage as well, but dependencies do so too.

This isn't what I am meaning.

The DIP only defines the language transformation; you are responsible for how it gets called, what can be waited upon, etc.

I.e. if you don't support ``await`` statements, you can static assert out if they are used.

```d
__generatedName generatedFromCompilerStateStruct = ...;
...
static assert(co.WaitingON.__tags.length == 1);
```

Or something akin to it.

It could return a waker, socket or anything else. You control what can be waited upon. The language isn't filtering it.

>> It is not part of the DIP. Without the operator overload example, it wouldn't be understood.
> 
> Then do not put it into the DIP. It should **only** contain your design and what's possible with it, without having to rely on possible future DIPs to add operators to make your DIP actually work.

The operator overload ``opConstructCo`` is part of the DIP.

Therefore there are examples for it.

But library types such as ``GenericCoroutine``, ``InstantiableCoroutine``, and ``Future`` are not part of the DIP; they are needed to show how the language feature can be used.

>> The compiler, using just the parse tree, can see the function ``opConstructCo`` on the library type ``InstantiableCoroutine``, allowing it to flag the type as an instantiable coroutine.
> 
> Again: this description says that the compiler treats `opConstructCo` differently from other functions. What would happen if I want to use another name? What will happen if I have multiple functions with the same signature but different names?

It is an operator overload, like any other.

You use what the language specifies, end of.

It has the ``op`` prefix, which is established for use by operator overload methods.

>> See above, it can see that it is a coroutine by the parameter, rather than on the argument.
> 
> So the argument (lambda) would not be a coroutine and could not use `await` or `@async return`? This seems counter-intuitive, as I can clearly see that code like this will exist:
> ```d
> ListenSocket ls = ListenSocket.create((Socket socket) @async {
>      auto line = await socket.readLine();
>      // ...
> });
> ```

You almost got to a good example on this; note that ``await`` is a statement, not an expression.

It'll be easier to transform into the state machine.

```d
ListenSocket ls = ListenSocket.create((Socket socket) @async {
     auto line = socket.readLine();
     await line;
     // ...
});
```
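For clarity, here is a minimal, compilable sketch of how such an ``await`` statement could lower into the tag-based state struct. The names ``__coState`` and ``waitingOnCoroutine`` echo the DIP's examples, but the body is illustrative only, not the DIP's actual lowering:

```d
struct GenericCoroutine {} // stand-in for the library's type-erased handle

struct __coState {
    int tag; // which block of the rewritten body runs next; negative = complete
    GenericCoroutine waitingOnCoroutine; // an `await expr;` stores expr here

    void execute() {
        switch (tag) {
        case 0:
            // original: auto line = socket.readLine();
            //           await line;
            // the await lowers to: store the dependency, advance the tag, return
            waitingOnCoroutine = GenericCoroutine();
            tag = 1;
            return; // yield back to the scheduler
        case 1:
            // original: the code after the await, with `line` now resolved
            tag = -1; // coroutine complete
            return;
        default:
            assert(0);
        }
    }
}

void main() {
    __coState st;
    st.execute(); // runs block 0, parks on the dependency
    assert(st.tag == 1);
    st.execute(); // scheduler resumes us once the dependency resolved
    assert(st.tag == -1);
}
```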

Lambdas, if you do not specify types in their parameter lists, are actually templates.

In this case it is explicitly required that the lambda takes the ``@async`` attribute from the corresponding parameter on ``create``, based upon that parameter's type.

This does imply that we cannot reject ``await`` statements and ``@async`` returns during parsing. That shouldn't be a problem given the grammar: it is ``await ...;``, not ``await;``, and there are currently no attributes on statements (though there are on declarations).
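To illustrate the inference, today's compiler already does the analogous thing with attributes on untyped lambda parameters. Here ``takesHandler`` is a stand-in for ``ListenSocket.create``, and ``@safe`` stands in for the proposed ``@async``, which does not exist in today's compiler:

```d
// The parameter type fixes both the lambda's parameter type and its attributes.
void takesHandler(void function(int) @safe f) @safe {
    f(41);
}

void main() @safe {
    // (x) { ... } gives x no type, so the literal is a template; int and @safe
    // are inferred from takesHandler's parameter type at the call site.
    takesHandler((x) {
        assert(x == 41);
    });
}
```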

> therefore the function should be annotated as `async`; especially because you say time and time again it should be usable by users without prior knowledge of the insides of the system. Making it so that functions can only have `await` if they're `@async`, but lambdas are whatever they want to be, seems like a huge booby trap.
> 
>> You don't win a whole lot by requiring it. Especially when they are templates and they look like they should "just work".
> 
> It makes things clearer for the writer (and future readers), and by extension the compiler, as it now certainly knows to slice the lambda, as this is the intention of the developer.

We infer attributes on templates.

I see no difference here.

Not doing it here seems like it would create more surprises than doing it.

>> It was heavily discussed
> 
> Where exactly? Haven't seen it yet, sorry. And even then: these should be part of the DIP under a section "non-goals" or "discarded ideas" so people know that a) they were considered and b) what the considerations were that led to the decision.

This is a "trust me": adding such a section is not helpful.

It ends up derailing things for the D community.

>> See the ``Prime Sieve`` example for one way you can do this.
> 
> I've seen it, but again: it uses undeclared things that aren't as clear as day if you're **not** the writer of the DIP.
> ```d
> InstantiableCoroutine!(int) ico = &generate;
> Future!int ch = ico.makeInstance();
> ```

"Given the following _potential_ shell of a library struct that is used for the purpose of examples only:"

I added the clarification at the end that it is used only for examples; it was already stated as part of ``Constructing Library Representation``.

> Why does this work? `generate` is a coroutine, but why can it be "just" assigned to a library shell? Does it "just work"? That's not how programming works or how standards should be written. I **could** see that you meant that a constructor that takes a template parameter with the `__descriptorco` should be used, but again: it is not stated in the DIP and as such should not be taken for "granted" just because you expect people to come to the conclusion themselves. Look at C++ papers, they are **huge** for a reason: EVERYTHING gets written down so no confusion can happen.

This is described in ``Constructing Library Representation``.

The relevant lowering is:

```d
// The location of this struct is irrelevant, as long as compile time accessible things remain available
struct __generatedName {
}

InstantiableCoroutine!(int, int) co = InstantiableCoroutine!(int, int)
	.opConstructCo!__generatedName;
```
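To make the shell less mysterious, here is a hypothetical, compilable ``opConstructCo`` on such a shell. Only the operator name comes from the DIP; everything in the body (the ``stateName`` field included) is invented here to illustrate that the library author decides what to do with the generated state struct:

```d
struct InstantiableCoroutine(Result) {
    string stateName; // illustrative: record which state struct built us

    static InstantiableCoroutine opConstructCo(CoState)() {
        InstantiableCoroutine ret;
        // CoState is the compiler-generated state struct; the library can
        // inspect it at compile time (tags, waiting types, entry point...).
        ret.stateName = CoState.stringof;
        return ret;
    }
}

struct __generatedName {} // stand-in for the compiler-generated state struct

void main() {
    // the lowering in the example above resolves to a call like this
    auto co = InstantiableCoroutine!int.opConstructCo!__generatedName();
    assert(co.stateName == "__generatedName");
}
```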

>> The ``await`` statement does two things.
>> 1. It assigns the expression's value into the state variable for waiting on.
>> 2. It yields.
> 
> Then please, for the love of god, put it into the DIP! I'm sorry that I'm so picky about this, but a **specification** (which your DIP is) should contain **every detail of your idea**, not only the bits Gemini deemed important. We're humans, and as such we should be especially careful to give each other as much information as possible.

Gemini is a test to see how well it could be understood prior to humans having to review it. If it cannot pass that, it cannot pass a human.

Hmm, ``Yielding`` does cover the tag side of things, but not the variable assignment in the state.

``// If we yield on a coroutine, it'll be stored here``

It was indeed added to the generated state struct, just not at the yielding side of it.

It was also added for exceptions.

>> Whereas the other approaches including C++ is still after much reading not in my mental model.
> 
> I somewhat start to get a grasp of yours: while in your model you try to just "throw" the awaited-on back to anyone interested in it and use a sumtype to do it, other languages define a stricter interface that needs to be followed: C++ with awaiters and Rust with its `Future<>`s and `Waker`s. Both ways prevent splits in the ecosystem, or one library getting on top while everything else just dies. That's what I honestly fear with the current approach: there will be one way to use dependencies and that's it. The problems it has will extend to all async code, and an outside viewer will declare async in dlang broken without anyone realising it's just the library that's broken. Take dlang's std.regex for example: it's very slow in comparison with others and you could easily roll your own, but nobody does, so everybody just assumes it's a "dlang" problem and moves on. While this has only minimal impact because it's just regex, with an entire language feature that will be presented through the lens of the most used or most "present" library (not popular! big difference), this will make people say "Hey, dlang's async is so bad because of this and that". I want to prevent such a thing.

Talking about regex engines... guess what I've been writing over the last two months :) And no, I cannot confirm that it is easy, especially with the Unicode stuff.

Other languages define the library stuff and directly tie it into the language lowering.

This proposal does not do that. It is purely the transformation.

How you design the library is on the library author, not the language!

One of the lessons we have learned about tying the language to a specific library is that it tends to err on the side of not working for everyone.

D classes are a great example of this: forcing you to use the root class ``Object``, and hitting issues with attributes, the monitor, etc.

I don't intend for us to make the same mistake here, especially on a subject where people have such different views on how it should work.

> With a stricter protocol on how things are awaited (C++) or how a coroutine can be "retried" / woken up (Rust), these problems go away. Any executor can rely on the fact that any io / waiting structure **will** follow protocol, and as such they're interchangeable, which is a **big** benefit to user and application code, as no one needs to re-invent the whole wheel.

So do it that way. Neither I, nor the language will stop you!

> Another benefit is that it (somewhat) helps in ensuring that the coroutine is actually in a good state without the executor needing to know about that state itself.
> 
> To help understanding a bit more the two models lets take a look at a "typical" flow of a coroutine:
> - starts coroutine
> - initiate `read_all()` of a file
> - `await`s the `read_all()` and pauses the coroutine
> - gets re-called since the waited on part is now resolved
> - processes data
> 
> In your proposal this works by setting a dependency on the `read_all()`'s return type. If the executor now simply ignores the dependency, it recalls the coroutine and the coroutine is in a bad state, as it does not validate whether the dependency is actually resolved (how would it?). As a result, you would need to put it inside a loop:

Sounds like a bug in the executor, if it allows you to ``await`` and then doesn't actually respect it.

> ```d
> ReadDependency r = ...;
> while (!r.isReady) {
>    await r;
> }
> ```
> Which is boilerplate best avoided.

Agreed.

I do not like this waker design. It seems highly inefficient.

I prefer the dependency design, as you will only be executed if you have what you need to make progress.

But if you, the library author, want to do it differently, all I can say is go for it!
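As a tiny sketch of the dependency design preferred above, using the hooks named earlier in the thread (``seeDependency``, ``onComplete``): a coroutine is only put back on the ready queue once what it waits on has completed. The ``Coro`` struct and the global queue are illustrative only:

```d
struct Coro {
    bool complete;
    Coro*[] dependents; // coroutines parked waiting on this one
}

Coro*[] readyQueue; // coroutines that can make progress right now

// Depender waits on dependency: park it unless the dependency already resolved.
void seeDependency(Coro* dependency, Coro* depender) {
    if (dependency.complete)
        readyQueue ~= depender;
    else
        dependency.dependents ~= depender;
}

// On completion, everything parked on us now has what it needs: schedule it.
void onComplete(Coro* finished) {
    finished.complete = true;
    readyQueue ~= finished.dependents;
    finished.dependents = null;
}

void main() {
    Coro dep, waiter;
    seeDependency(&dep, &waiter); // waiter parks; it is never polled
    assert(readyQueue.length == 0);
    onComplete(&dep);
    assert(readyQueue.length == 1 && readyQueue[0] is &waiter);
}
```

The point of the design: the scheduler never has to re-run a coroutine speculatively, so the boilerplate loop from the quote above cannot arise.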

> Secondly the `read_all` itself: it and the executor would need to agree on an out-of-language protocol on how to actually handle the dependency; this will most likely mean a library exposing an interface like `Awaitable` that any dependency would need to implement, but with the downside that any dependent now has an explicit dependency on said library. Sure, maybe over time a standard set of interfaces would arise that the community would adopt, but then we have the API-dependency hell of Java just re-invented.

That is correct: the language-level transformation that this DIP proposes does not deal with this library stuff. The usage in examples is just that, example code to show it can be utilized.

If I were to propose a specific approach to this, I would have people complaining that it doesn't work the way that they want it to and for good reason.

My library uses the ``GenericCoroutine`` and ``Future`` to do all of this.

With the help of what I call future completion, which is a ``Future`` in API but isn't actually a coroutine. That is how my socket reads return.

https://github.com/Project-Sidero/eventloop/blob/master/source/sidero/eventloop/coroutine/future_completion.d#L216

> In C++ the `co_await` dictates that the coroutine is blocked as long as the `Awaiter` protocol says it is, since any user **expects** that the `await`ed thing is actually resolved after it's `await`ed. It doesn't matter whether successfully or not; the key point is that it's **not pending** anymore.

Yeah, the way I view it is that a coroutine has to be complete (error, have a value, etc.), or have a value (multiple return), before continuation occurs.

But the language transformation isn't responsible for guaranteeing it.
Although I would recommend it.

> In Rust it's even simpler: polling is a concept that even kids understand: when you want your parents to give you something, you "poll" until they give it to you or tell you no in a way that keeps you from continuing what you originally wanted to do. Same thing in Rust: a coroutine is "polled" by the executor and can either resolve with the data you expected, or tell you that it's still waiting and to come back later. The compiler ensures that only a ready state is ever allowed to continue the coroutine. If you want to be more performant and not spin-lock in the executor in the hopes that someday the future will resolve, you can give it a waker and say: "hey, if you say you are still not done, I will do other things; if you think you're ready for me to try again, just call this and I will come to you!".

Yes, that is a kind of dependency approach. But it is done by means other than how I do it.
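To show that the transformation doesn't preclude it, here is a hedged sketch of a Rust-style poll/waker protocol written purely as D library code; every name here (``Poll``, ``Waker``, ``CountdownFuture``) is invented for illustration, nothing requires language support:

```d
enum PollState { pending, ready }

struct Poll(T) {
    PollState state;
    T value; // meaningful only when state == ready
}

struct Waker {
    void delegate() wake; // executor-supplied: "call me when you can progress"
}

// A pollable future that pretends to take two ticks to complete.
struct CountdownFuture {
    int ticksLeft;

    Poll!int poll(Waker w) {
        if (ticksLeft > 0) {
            ticksLeft--;
            w.wake(); // real code would store the waker and call it on readiness
            return Poll!int(PollState.pending, 0);
        }
        return Poll!int(PollState.ready, 42);
    }
}

void main() {
    int wakeups;
    auto fut = CountdownFuture(2);
    auto waker = Waker(() { wakeups++; });

    Poll!int r = fut.poll(waker);
    while (r.state == PollState.pending)
        r = fut.poll(waker);

    assert(r.value == 42 && wakeups == 2);
}
```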

The DIP, as far as I know (and I've done some minimal exploration in this thread), should work for this, since the language knows nothing about how your scheduler works.