Coroutines! Oh coroutines.
Recently Walter has shown some interest in me describing my library implementation of them, given my belief that they should be used for asynchronous tasks and hence need to be part of the language.
Before I begin I just want to say that while I have a library implementation working, it is not the only way to do coroutines, nor is the proposed syntax the only possible syntax.
So to begin with, let's define the library API as this library author sees it.
struct InstantiableCoroutine(ResultType, Args...) {
    this(T : __coroutine)(immutable T co);

    bool isNull();
    bool isInstantiated();

    InstantiableCoroutine makeInstance(Args args);

    GenericCoroutine opCast(T : GenericCoroutine)();
    Future!ResultType opCast(T : Future!ResultType)();

    GenericCoroutine waitingOn();
    COResult!ResultType result();
    bool isComplete();

    void unsafeContinue() @system;
}
This is the premise from which the other two API representations are built.
What we wish to describe with this API is a type that can spawn new instances, after having a descriptor passed in via, say, a closure or a function pointer (whose function is also being compiled as a coroutine).
It needs to be able to tell the user some information about itself: whether it is instantiated, and if so, whether it is currently complete or waiting on another coroutine, along with the respective values.
Lastly, we must be able to continue execution once we are no longer waiting on our condition.
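As a rough usage sketch, here is how I imagine the above being exercised by hand. Everything beyond the declarations above, including the entryPoint function and the lowering of a function pointer into a descriptor, is an assumption on my part:

// Hypothetical coroutine entry point; the compiler would lower this into a
// descriptor acceptable to the templated constructor above.
int entryPoint(string arg);

void example() {
    // Construct a descriptor (not yet an instance) from the function pointer.
    InstantiableCoroutine!(int, string) descriptor = &entryPoint;
    assert(!descriptor.isNull && !descriptor.isInstantiated);

    // Spawn an instance with its arguments bound.
    auto instance = descriptor.makeInstance("hello");
    assert(instance.isInstantiated);

    // Normally the event loop drives this; doing it by hand is @system.
    while (!instance.isComplete)
        instance.unsafeContinue();

    auto value = instance.result; // COResult!int
}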
Of note are GenericCoroutine and Future. Take notice of the template arguments provided to the opCast overloads: their respective methods are a subset of InstantiableCoroutine's.
For my implementation I use structs and reference counting.
I find that this works quite well.
However, due to the possibility of cyclic relationships, a GC may be desired if you want a more naive solution.
The GenericCoroutine is where the real magic happens.
It is what all the coroutine APIs boil down to once the templates are stripped away.
It is the abstraction the event loop uses internally for dependency analysis: working out which coroutine is waiting on which, and sending a coroutine off to the worker pool to execute once it is no longer waiting on another.
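To make that subset relationship concrete, the type-erased handle might look roughly like this. This is a sketch only; the real type lives in [1] and its fields are internal to the event loop, and the isRunnable helper is hypothetical:

// Sketch: the type-erased handle the event loop schedules, with no
// knowledge of ResultType or Args.
struct GenericCoroutine {
    bool isNull();
    bool isInstantiated();
    GenericCoroutine waitingOn(); // the dependency edge the scheduler follows
    bool isComplete();
    void unsafeContinue() @system; // run the next stage on a worker thread
}

// Hypothetical worker-pool check: a coroutine is runnable once its
// dependency (if any) has completed.
bool isRunnable(GenericCoroutine co) {
    auto dep = co.waitingOn();
    return dep.isNull() || dep.isComplete();
}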
Next up is a specialized coroutine implementation that I call "future-completion".
A future-completion uses the same API as above, but with one very key difference: the descriptor state does not come from the user.
It comes from a prebuilt library function that only wants to know the return type of the future.
It will only complete after it has been externally triggered.
This can be done for, say, a socket read, based upon such functions as poll.
For reference counting this helps break up cycles, since you can store the trigger separately from the reference-counted abstraction.
So it does a lot of jobs; I have found it to be irreplaceable.
As a feature, it is easily one of my best ideas, simply because it is an ordinary coroutine that integrates into the worker dependency state for coroutines, and does not require the user to know that it is special.
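In sketch form, the external trigger might be shaped like this. The names here (FutureTrigger, makeFutureCompletion) are hypothetical; the real signatures are in [2]:

// Hypothetical trigger handle, stored separately from the reference-counted
// coroutine so that cycles can be broken.
struct FutureTrigger(ResultType) {
    void complete(ResultType value);      // fulfil the future externally
    void completeWithError(Exception ex); // or fail it
}

// Hypothetical library factory: only the return type is needed; the
// descriptor state is prebuilt rather than supplied by the user.
Future!ResultType makeFutureCompletion(ResultType)(out FutureTrigger!ResultType trigger);

// Usage for a socket read: hand the future to the caller, register the
// trigger with poll(), and call trigger.complete(buffer) once readable.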
For a language feature, it should not be tied to a given library implementation; there is no reason for it to be.
Done properly, and with enough understanding of the compiler, it should be possible to write code similar to what I posted earlier without using any attributes to mark a coroutine object as implicitly constructible.
The compiler can see that implicit construction is possible because of the constructor template check.
This leads us to wanting to describe the user experience:
struct ListenSocket {
    static ListenSocket listen(A...)(InstantiableCoroutine!(Socket, A) coroutine, ushort port);
}
ListenSocket.listen((Socket socket) /* @notls */ {
    writeln(socket.read(4));

    // is equivalent to
    /+
    Future!(ubyte[]) __temp0 = socket.read(4);
    await __temp0;
    assert(__temp0.isComplete);
    writeln(__temp0.result);
    +/
}, port: 8080);
It looks sequential.
The user knows nothing about the coroutines happening underneath.
This is quite honestly the holy grail of asynchronous programming, which we as a field have been studying and trying to make work since the 1950s.
See [0] for more information.
We can do it.
There is nothing stopping us except political will.
[0] https://www.amazon.com.au/Concurrent-Programming-C-R-Snow/dp/0521339936
[1] https://github.com/Project-Sidero/eventloop/tree/master/source/sidero/eventloop/coroutine
[2] https://github.com/Project-Sidero/eventloop/blob/master/source/sidero/eventloop/tasks/future_completion.d
[3] https://github.com/Project-Sidero/eventloop/blob/master/source/sidero/eventloop/internal/workers.d#L169
My code, while not ready for users, does work.
See [1] for the coroutines API (the builder has the unit test), [2] for the future-completion logic (yes, there is a unittest in there!), and [3] for the internal coroutine dependency state.
For those interested in the state that the compiler would generate, consider:
struct CO_Object_xx(LibraryType) {
    enum Functions = [&__stage1, &__stage2];

    alias UserVars = AliasSeq!(...); // import std.meta : AliasSeq;
    alias ResultType = ...;

    static struct State {
        Stage stage;
        LibraryType waitingOn;

        UserVars vars;
        ResultType result;

        version(D_BetterC) {
        } else {
            Exception resultException;
        }

        ~this(); // cleanup if required

        bool isComplete() {
            return this.stage == Stage.CompleteValue || this.stage == Stage.CompleteException;
        }
    }

    enum Stage {
        Stage_1,
        Stage_2,
        ReadyToStart,
        CompleteValue,
        CompleteException,
    }

    // static so that &__stage1 / &__stage2 above yield plain function pointers
    static void __stage1(scope State* state) {
    }

    static void __stage2(scope State* state) {
    }
}
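To tie this back to the socket listing above, the lambda's body would be split at the await point into the two stages, roughly like so. This is a sketch of plausible compiler output under the assumptions of this post, not what any compiler emits today:

// Stage 1: run up to the first await, record the dependency, then yield.
static void __stage1(scope State* state) {
    state.vars.__temp0 = state.vars.socket.read(4); // Future!(ubyte[])
    state.waitingOn = cast(GenericCoroutine)state.vars.__temp0;
    state.stage = Stage.Stage_2; // resume here once the read completes
}

// Stage 2: the dependency has completed; consume its result and finish.
static void __stage2(scope State* state) {
    assert(state.vars.__temp0.isComplete);
    writeln(state.vars.__temp0.result); // import std.stdio : writeln;
    state.stage = Stage.CompleteValue;
}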