Thread overview
[OT] Article on coroutines: Function Colors Represent Different Execution Contexts
August 10

Here is an introductory blog article that attempts to refute the notion that function coloring in coroutines is a bad thing. Which may interest some people.

https://danieltan.weblog.lol/2025/08/function-colors-represent-different-execution-contexts

August 10

On Sunday, 10 August 2025 at 13:49:43 UTC, Richard (Rikki) Andrew Cattermole wrote:

> Here is an introductory blog article that attempts to refute the notion that function coloring in coroutines is a bad thing. Which may interest some people.
>
> https://danieltan.weblog.lol/2025/08/function-colors-represent-different-execution-contexts

I'm partway through reading this paper, and it argues in a different direction.

https://www.cs.tufts.edu/~nr/cs257/archive/roberto-ierusalimschy/revisiting-coroutines.pdf

Namely, it argues for one particular form of stackful coroutine, which they term "full asymmetric coroutines".

There are also some well-known papers prepared for the C++ WG:

  • p1364r0.pdf : Fibers under the magnifying glass
  • p0866r0.pdf : Response to "Fibers under the magnifying glass"
  • p1520r0.pdf : Response to response to "Fibers under the magnifying glass"

From memory they touch upon an important part of the impact of "function colouring".

One of the observations I found in the 866 paper was that a stackless coroutine which exceeds the lifetime of its invoker in effect becomes an inefficiently implemented segmented stackful coroutine.

The obvious mitigation would then be to only allow such stackless coroutines to be used in a "structured concurrency" manner, such that they cannot exceed said lifetime.
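
As a rough sketch of what that containment looks like (names are invented, and it is built on druntime's stackful Fiber simply because that is the coroutine primitive available today; the lifetime argument is the same for stackless ones), a scope that refuses to return until all of its coroutines have finished:

```d
import core.thread : Fiber;
import std.stdio;

// Invented names: a scope object that owns its coroutines and drives them
// to completion before the enclosing function returns, so no coroutine can
// outlive its invoker.
struct TaskScope {
    private Fiber[] children;

    void spawn(void delegate() dg) {
        children ~= new Fiber(dg);
    }

    // Keep resuming unfinished children until every one has terminated.
    void join() {
        bool pending = true;
        while (pending) {
            pending = false;
            foreach (f; children) {
                if (f.state != Fiber.State.TERM) {
                    f.call();
                    pending = true;
                }
            }
        }
    }
}

void caller() {
    TaskScope tasks;
    scope (exit) tasks.join();   // children cannot escape this stack frame

    tasks.spawn({ writeln("step 1"); Fiber.yield(); writeln("step 2"); });
    tasks.spawn({ writeln("another task"); });
}

void main() { caller(); }
```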

--

As to the post you mention, I'm not sure what he is trying to suggest by 'execution bridges'. I don't believe he has proven that 'function colouring' is a good thing, or at least something to be embraced.

Possibly he needs to better expand upon his concept of 'execution bridge', as for now I just see it as unhelpful hand waving.

From a personal perspective, I've used both approaches (stackful and stackless), but I tend to view them as having different domains of applicability, and I just prefer stackful when I'm able to use it, as it is easier to reason about.

--

Taking the stackless one, I've only really found it useful in the Simon Tatham form, where what is happening is explicit; and at most I would desire some lightweight way (syntax, or whatever) of achieving that.
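
For anyone unfamiliar with that form, here is the idea in miniature (a hand-rolled state machine rather than Tatham's actual C macros): the resumption point is ordinary data, so every suspension is explicit in the source.

```d
import std.stdio;

// The Simon Tatham style, written out by hand: the resumption point is an
// ordinary field, so every suspension is visible in the source and there is
// no hidden stack to reason about.
struct Handshake {
    int state;   // which "yield" we are currently parked at

    // Returns the next message to emit, or null once the sequence is done.
    string step() {
        switch (state) {
        case 0: state = 1; return "SYN";       // yield
        case 1: state = 2; return "SYN-ACK";   // yield
        case 2: state = 3; return "ACK";       // yield
        default: return null;                  // finished
        }
    }
}

void main() {
    Handshake h;
    for (auto m = h.step(); m !is null; m = h.step())
        writeln(m);
}
```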

However, I've not found the 'async' and 'await' pattern to be useful there, as it just ends up exploding the necessary mental juggling. So I don't have a concrete suggestion of how to achieve that.

I've actually found the explicit queuing of event-based callbacks (see Zebra, and FRR) easier to reason about, as the parts are in one's face. Still, there is no 'colouring', only a single execution stack, and one still has to manually manage the state (usually in shared structures passed between event callbacks).

Blocking is handled/avoided by explicitly queuing a new callback event. That sort of scheme generally has a fairly small "runtime" to drive the callback events.
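
A bare-bones sketch of that shape (all names invented; Zebra and FRR do the equivalent in C with their own thread/event libraries): a queue of callbacks, a loop that drains it, and "blocking" handled by posting the continuation as a new event.

```d
import std.stdio;

alias Event = void delegate();

// The whole "runtime": a FIFO of callbacks and a loop that drains it.
struct EventLoop {
    private Event[] queue;

    void post(Event ev) { queue ~= ev; }

    void run() {
        while (queue.length) {
            auto ev = queue[0];
            queue = queue[1 .. $];
            ev();
        }
    }
}

// State shared between callbacks is managed by hand, typically in a struct
// handed from one callback to the next.
struct Request { string name; int result; }

void main() {
    EventLoop loop;
    auto req = new Request("lookup");

    // Instead of blocking, the first callback posts its continuation.
    loop.post({
        writeln("start ", req.name);
        req.result = 42;             // pretend this arrived from I/O
        loop.post({                  // continuation queued as a new event
            writeln("finish ", req.name, " -> ", req.result);
        });
    });

    loop.run();
}
```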

So possibly syntax for creating that form of system may be more useful than the async/await scheme? Maybe something which can automatically create a closure representing the continuation from a completed call to a routine marked as 'blocking', or maybe just a 'blocking' / 'async' call site?

August 12
On 11/08/2025 9:20 AM, Derek Fawcus wrote:
> On Sunday, 10 August 2025 at 13:49:43 UTC, Richard (Rikki) Andrew Cattermole wrote:
>> Here is an introductory blog article that attempts to refute the notion that function coloring in coroutines is a bad thing. Which may interest some people.
>>
>> https://danieltan.weblog.lol/2025/08/function-colors-represent-different-execution-contexts
> 
> I'm partway through reading this paper, and it argues in a different direction.
> 
> https://www.cs.tufts.edu/~nr/cs257/archive/roberto-ierusalimschy/revisiting-coroutines.pdf
> 
> Namely, it argues for one particular form of stackful coroutine, which they term "full asymmetric coroutines".

"We also did not prove that we cannot express multishot continuations using coroutines."

It's from 2009; stackless coroutines were not mainstream at the time.

https://softwareengineering.stackexchange.com/a/377514

There was no argument in that paper regarding stackless coroutines.

Stackful coroutines have a massive problem associated with them: you cannot model yields. This means things like locks can be held whilst yielding, leading to deadlocks.

Whilst you can throw another attribute at it, it does mean another attribute that all code must be annotated with. Quite invasive.
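
To make the underlying hazard concrete, here is a minimal example with druntime's Fiber; nothing in worker's signature or at its call site reveals that the lock is still held when the fiber suspends.

```d
import core.thread : Fiber;
import core.sync.mutex : Mutex;
import std.stdio;

__gshared Mutex gLock;

void worker() {
    gLock.lock();
    scope (exit) gLock.unlock();

    // The suspension is buried inside an ordinary-looking function body;
    // neither the signature of worker() nor its call site says that gLock
    // is still held here. Any other thread that needs gLock before this
    // fiber is resumed is stuck, and if the code that would resume it is
    // itself waiting on gLock, that is a deadlock.
    Fiber.yield();

    writeln("resumed, lock released on scope exit");
}

void main() {
    gLock = new Mutex;
    auto f = new Fiber(&worker);
    f.call();   // runs up to the hidden yield; the lock is still held here
    f.call();   // resumes and finally releases it
}
```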

> There are also some well-known papers prepared for the C++ WG:
> 
> - p1364r0.pdf : Fibers under the magnifying glass
> - p0866r0.pdf : Response to "Fibers under the magnifying glass"
> - p1520r0.pdf : Response to response to "Fibers under the magnifying glass"
> 
>  From memory they touch upon an important part of the impact of "function colouring".
> 
> One of the observations I found in the 866 paper was that a stackless coroutine which exceeds the lifetime of its invoker in effect becomes an inefficiently implemented segmented stackful coroutine.

Segmented stackful coroutines are for all intents and purposes their own calling convention.

As soon as we realized that, several of us immediately NOPED on it.

> The obvious mitigation would then be to only allow such stackless coroutines to be used in a "structured concurrency" manner, such that they cannot exceed said lifetime.

The parent may not be another stackless coroutine. It may be some class instance, say a windowing event handler.

Or it may be on a timer. But yes, I agree: if you can model a parent lifetime, that would be ideal.

> Taking the stackless one, I've only really found it useful in the Simon Tatham form, where what is happening is explicit; and at most I would desire some lightweight way (syntax, or whatever) of achieving that.
> 
> However, I've not found the 'async' and 'await' pattern to be useful there, as it just ends up exploding the necessary mental juggling.  So I don't have a concrete suggestion of how to achieve that.
> 
> I've actually found the explicit queuing of event-based callbacks (see Zebra, and FRR) easier to reason about, as the parts are in one's face. Still, there is no 'colouring', only a single execution stack, and one still has to manually manage the state (usually in shared structures passed between event callbacks).
> 
> Blocking is handled/avoided by explicitly queuing a new callback event. That sort of scheme generally has a fairly small "runtime" to drive the callback events.
> 
> So possibly syntax for creating that form of system may be more useful than the async/await scheme?  Maybe something which can automatically create a closure representing the continuation from a completed call to a routine marked as 'blocking', or maybe just a 'blocking' / 'async' call site?

Like this?

```d
Future!int toBeHad();

class SomeHandler {
	void onEvent(Task task) {
		// schedule task
	}
}

class MySomeHandler : SomeHandler {
	void thingHappens() {
		onEvent(() {
			writeln("on the event");

			int value = toBeHad().result;
			...

			writeln("done some work on ", value);
		});
	}
}
```

Almost everything there will work with my stackless coroutine DIP; the automatic passing of captured variables isn't supported, and the onEvent handler needs some improvements for the return type.
August 11
On Monday, 11 August 2025 at 16:36:32 UTC, Richard (Rikki) Andrew Cattermole wrote:
>
> Stackful coroutines have a massive problem associated with them, you cannot model yields. This means things like locks can be held whilst yielding. Leading to deadlocks.

My approach is usually actor or CSP style, hence avoiding explicit yields, and generally doing without shared structures that require locks. So it hasn't been an issue.

In a recent program, I had two shared structures needing explicit locks, but they were encapsulated within constrained routines, sans comms. Yes, it required rigor rather than the compiler proving something.

For CSP style, the obvious way to model the yield is to have syntax for the send and receive, as Go does (and Alef did); then they could be checked by the compiler (which is what I assume you mean by modelling) to ensure there is no yield while holding a lock.

Since there is already a Go linter doing certain lock checks (that a mutex is not copied), possibly it could be enhanced to perform the above check - i.e. no (blocking) send or receive while holding a lock.
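
To make the shape of that check concrete, here is roughly the call-site pattern it would have to reject, with std.concurrency's send/receive standing in for Go/Alef channel operations (names like gLock are invented; the D code is purely illustrative):

```d
import std.concurrency : receiveOnly, send, spawn;
import core.sync.mutex : Mutex;
import std.stdio;

__gshared Mutex gLock;
__gshared int total;

void consumer() {
    gLock.lock();
    scope (exit) gLock.unlock();

    // A blocking receive (the "yield", in CSP terms) issued while the lock
    // is held: this is the call-site pattern such a checker would flag.
    auto n = receiveOnly!int();
    total += n;
}

void main() {
    gLock = new Mutex;
    auto tid = spawn(&consumer);
    tid.send(1);   // if this sender also needed gLock first, we'd be deadlocked
    writeln("sent");
}
```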

[What I need to ponder is whether yielding while holding a lock would necessarily be a problem under a non-preemptive coroutine scheduler - i.e. not Go style. My suspicion is that it wouldn't always be a problem, but it would be difficult for the tooling to detect and prove that]

[snip]

> Like this?

[snip]

I'll have a think about that and see if I can figure out what you're doing.

What was the reference to your DIP again?


August 12
On 12/08/2025 5:25 AM, Derek Fawcus wrote:
> I'll have a think about that and see if I can figure out what you're doing.
> 
> What was the reference to your DIP again?

https://github.com/dlang/DIPs/blob/master/DIPs/1NNN-RAC.md