April 05
On 05/04/2024 10:11 PM, Dukc wrote:
>     Temporal safety is about making sure one thread doesn't stomp all
>     over memory that another thread also knows about.
> 
>     So this is locking, ensuring only one thread has a reference to it,
>     atomics, etc.
> 
>     Moving us over to this without the edition system would break
>     everyone's code. So it has to be based upon this.
> 
>     So the question of this thread is all about how we annotate our
>     code to indicate it's temporally safe and how it maps onto older
>     editions' view of what safe is. There are at least three different
>     solutions to this that I have come up with.
> 
> Isn't `shared` just for this? As far as I can tell, you can define a data structure struct that, when `shared`, allows multiple threads to access it, works from 100% `@safe` client code and doesn't allow any data races to happen.
> 
> Of course, actually implementing the data structure is challenging, just as it is for a dip1000-using reference counted data structure.

``shared`` doesn't provide any guarantees.

At best it gives us a story that makes us think we have it solved. A very comforting story, in fact; so much so that people want to use it.

Which also happens to make it a very bad language feature that I want to see removed. Lies like this do not improve the D experience. They do not allow optimizations to occur.

See all the examples of people having to cast on/off ``shared`` to pass memory between threads. With temporal safety we'd have some sort of immutable reference that enables us to transfer ownership of an object across functions/threads, with a guarantee that it isn't accessible by anybody else.
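
Roughly this kind of thing, as a minimal sketch using `std.concurrency` (the details are made up for illustration; the same dance shows up with any hand-rolled queue):

```d
import std.concurrency : spawn, send, receiveOnly, ownerTid;

void worker()
{
    // Cast shared straight back off so the data can be used normally.
    // Nothing checks that the sender actually gave the memory up.
    auto buffer = cast(int[]) receiveOnly!(shared(int)[])();
    buffer[0] = 123;
    ownerTid.send(true); // tell the owner we're done
}

void main()
{
    auto buffer = new int[4];

    auto tid = spawn(&worker);
    // `send` refuses mutable thread-local data, so shared gets cast on...
    tid.send(cast(shared(int)[]) buffer);

    // ...yet this thread still sees exactly the same memory, and nothing
    // stops it from touching `buffer` while the worker does too.
    receiveOnly!bool();
    assert(buffer[0] == 123);
}
```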

There should also be some way of determining that a function is temporally safe: for instance, that it only accesses atomics, has been synchronized, and only uses immutable references or immutable data.

Note: immutable reference here does not refer to the ``immutable`` type qualifier, but to a language feature where the reference to memory is limited to a single location.
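
To give a flavour of it, here is a rough library-level sketch of the single-location idea. `Isolated` is a name made up for illustration; the real thing would be a compiler-checked language feature, not a library type:

```d
import core.lifetime : move;

struct Isolated(T)
{
    private T* payload;

    @disable this(this); // no implicit copies: at most one live reference

    this(T value)
    {
        payload = new T;
        *payload = value;
    }

    ref T get()
    {
        assert(payload !is null, "ownership was moved away");
        return *payload;
    }
}

unittest
{
    auto a = Isolated!int(42);
    assert(a.get == 42);

    auto b = move(a); // ownership is transferred, never duplicated
    assert(b.get == 42);

    // auto c = b;    // does not compile: copying is disabled
}
```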
April 05

On Friday, 5 April 2024 at 09:59:58 UTC, Richard (Rikki) Andrew Cattermole wrote:
> See all the examples of people having to cast on/off ``shared`` to pass memory between threads.

That's what it's like if you try to share plain arrays. And that's how it should be. Manipulating a shared data structure in a temporally safe way is complicated, so it makes sense that, to access `shared` data, you need to either explicitly give up temporal safety (cast) or do it the hard way (`core.atomic`).

But if you had the data structure struct, you wouldn't have to do either. It would have a `shared` constructor and `shared` member functions to manipulate all the data, with all the ugly atomics and/or casting getting done in the struct implementation. It'd let you copy part of itself to your thread-local storage and inspect it there at your leisure. It'd let you lock part of itself for a time when you wish to do an in-place update (during which the data in question would be typed as thread-local and guarded against escape with DIP1000). And so on.
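
Very roughly, I'm imagining something along these lines (a sketch only; the names are made up and a real implementation would need the escape checking mentioned above):

```d
import core.sync.mutex : Mutex;

struct SharedAccumulator
{
    private Mutex mutex;
    private int[] values;

    // `shared` constructor: the object can be handed to other threads right away.
    this(int firstValue) shared
    {
        mutex = new shared Mutex();
        values = cast(shared(int)[]) [firstValue];
    }

    // All the ugly locking and casting stays inside the implementation.
    void add(int value) shared
    {
        mutex.lock();
        scope(exit) mutex.unlock();
        auto local = cast(int[]) values; // we hold the lock, so this is okay
        local ~= value;
        values = cast(shared(int)[]) local;
    }

    // Copy part of the structure out, so the caller can inspect it in
    // thread-local storage at their leisure.
    int[] snapshot() shared
    {
        mutex.lock();
        scope(exit) mutex.unlock();
        return (cast(int[]) values).dup;
    }
}

// Usage would look something like:
//     auto acc = new shared SharedAccumulator(0);
//     // pass `acc` to spawned threads; they call acc.add / acc.snapshot
```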

April 05
On 05/04/2024 11:24 PM, Dukc wrote:
> On Friday, 5 April 2024 at 09:59:58 UTC, Richard (Rikki) Andrew Cattermole wrote:
>> See all the examples of people having to cast on/off ``shared`` to pass memory between threads.
> 
> That's what it's like if you try to share plain arrays. And that's how it should be. Manipulating a shared data structure in a temporally safe way is complicated, so it makes sense that, to access `shared` data, you need to either explicitly give up temporal safety (cast) or do it the hard way (`core.atomic`).
> 
> But if you had the data structure struct, you wouldn't have to do either. It would have a `shared` constructor and `shared` member functions to manipulate all the data, with all the ugly atomics and/or casting getting done in the struct implementation. It'd let you copy part of itself to your thread-local storage and inspect it there at your leisure. It'd let you lock part of itself for a time when you wish to do an in-place update (during which the data in question would be typed as thread-local and guarded against escape with DIP1000). And so on.

You are assuming the data structure is the problem.

It isn't; the problem is who knows about the data structure, its arguments, and its return value. Across the entire program, on every thread, in every global, in every function call frame.

``shared`` does not offer any guarantees about references: how many there are, or which threads they are on. None of it. It is fully up to the programmer to void it, should they choose to do so, in normal ``@safe`` code.

If you cannot prove aliasing at compile time and in doing so enable optimizations, it isn't good enough for temporal safety.
April 05
On Friday, 5 April 2024 at 09:59:58 UTC, Richard (Rikki) Andrew Cattermole wrote:
> See all the examples of people having to cast on/off ``shared`` to pass memory between threads. With temporal safety we'd have some sort of immutable reference that enables us to transfer ownership of an object across functions/threads, with a guarantee that it isn't accessible by anybody else.

Sounds a lot like what I know as unique.

> [...] a language feature where the reference to memory is limited to a single location.

So, unique?
April 06
On 06/04/2024 10:32 AM, Sebastiaan Koppe wrote:
> On Friday, 5 April 2024 at 09:59:58 UTC, Richard (Rikki) Andrew Cattermole wrote:
>> See all the examples of people having to cast on/off ``shared`` to pass memory between threads. With temporal safety we'd have some sort of immutable reference that enables us to transfer ownership of an object across functions/threads, with a guarantee that it isn't accessible by anybody else.
> 
> Sounds a lot like what I know as unique.
> 
>> [...] a language feature where the reference to memory is limited to a single location.
> 
> So, unique?

At the most basic level yes.

I'm simplifying it to not lock me into any specific behavior of references to the sub graph.

https://joeduffyblog.com/2016/11/30/15-years-of-concurrency/
April 06
On Saturday, 6 April 2024 at 06:12:21 UTC, Richard (Rikki) Andrew Cattermole wrote:
> On 06/04/2024 10:32 AM, Sebastiaan Koppe wrote:
>> So, unique?
>
> At the most basic level yes.
>
> I'm simplifying it to not lock me into any specific behavior of references to the sub graph.
>
> https://joeduffyblog.com/2016/11/30/15-years-of-concurrency/

Unique would be a great addition to have.

We still need shared, though. For when you want to have multiple threads refer to the same object.
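
Something like this, where several threads hold references to one and the same object (a small sketch; the `Counter` type is made up):

```d
import core.atomic : atomicLoad, atomicOp;
import core.thread : thread_joinAll;
import std.concurrency : spawn;

final class Counter
{
    private int count;

    void increment() shared
    {
        atomicOp!"+="(count, 1);
    }

    int get() shared
    {
        return atomicLoad(count);
    }
}

void worker(shared Counter c)
{
    foreach (_; 0 .. 1_000)
        c.increment();
}

void main()
{
    auto counter = new shared Counter;

    // Four threads, all referring to the very same object.
    foreach (_; 0 .. 4)
        spawn(&worker, counter);

    thread_joinAll();
    assert(counter.get == 4_000);
}
```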
April 06
On 06/04/2024 11:02 PM, Sebastiaan Koppe wrote:
> On Saturday, 6 April 2024 at 06:12:21 UTC, Richard (Rikki) Andrew Cattermole wrote:
>> On 06/04/2024 10:32 AM, Sebastiaan Koppe wrote:
>>> So, unique?
>>
>> At the most basic level yes.
>>
>> I'm simplifying it to not lock me into any specific behavior of references to the sub graph.
>>
>> https://joeduffyblog.com/2016/11/30/15-years-of-concurrency/
> 
> Unique would be a great addition to have.
> 
> We still need shared, though. For when you want to have multiple threads refer to the same object.

I do want to see cross-thread temporal safety provided as well.
But I don't think it will be using shared to do it.

Making it all come together requires pieces that haven't been designed yet, so this would be the last stage to design; or at least, that is how I've been approaching it.
April 08

On Friday, 5 April 2024 at 10:34:24 UTC, Richard (Rikki) Andrew Cattermole wrote:
> On 05/04/2024 11:24 PM, Dukc wrote:
>> On Friday, 5 April 2024 at 09:59:58 UTC, Richard (Rikki) Andrew Cattermole wrote:
>>> See all the examples of people having to cast on/off ``shared`` to pass memory between threads.
>>
>> That's what it's like if you try to share plain arrays. And that's how it should be. Manipulating a shared data structure in a temporally safe way is complicated, so it makes sense that, to access `shared` data, you need to either explicitly give up temporal safety (cast) or do it the hard way (`core.atomic`).
>>
>> But if you had the data structure struct, you wouldn't have to do either. It would have a `shared` constructor and `shared` member functions to manipulate all the data, with all the ugly atomics and/or casting getting done in the struct implementation. It'd let you copy part of itself to your thread-local storage and inspect it there at your leisure. It'd let you lock part of itself for a time when you wish to do an in-place update (during which the data in question would be typed as thread-local and guarded against escape with DIP1000). And so on.
>
> You are assuming the data structure is the problem.
>
> It isn't; the problem is who knows about the data structure, its arguments, and its return value. Across the entire program, on every thread, in every global, in every function call frame.

Why would that be a problem? A shared variable does not have to be a global variable. You can instantiate the shared data structure with `new`, or even as a local variable, and then pass references to it only to those threads you want to know about it.

> ``shared`` does not offer any guarantees about references: how many there are, or which threads they are on. None of it. It is fully up to the programmer to void it, should they choose to do so, in normal ``@safe`` code.

My impression is you can't do that, unless the data structure you're using is flawed (well, `dataStruct.tupleof` works to bypass `@safe`ty, but I don't think it's relevant since it probably needs to be fixed anyway). Pseudocode example?

April 09
On 09/04/2024 7:43 AM, Dukc wrote:
> On Friday, 5 April 2024 at 10:34:24 UTC, Richard (Rikki) Andrew Cattermole wrote:
>> ``shared`` does not offer any guarantees about references: how many there are, or which threads they are on. None of it. It is fully up to the programmer to void it, should they choose to do so, in normal ``@safe`` code.
> 
> My impression is you can't do that, unless the data structure you're using is flawed (well, `dataStruct.tupleof` works to bypass `@safe`ty, but I don't think it's relevant since it probably needs to be fixed anyway). Pseudocode example?

```d
void thread1() {
	shared(Type) var = new shared Type;
	sendToThread2(var);

	for(;;) {
		// I know about var!
	}
}

void sendToThread2(shared(Type) var) {
	thread2(var);
}

void thread2(shared(Type) var) {
	for(;;) {
		// I know about var!
	}
}
```

The data structure is entirely irrelevant.

Shared provides no guarantees to stop this.

No library features can stop it either, unless you want to check ref counts (which won't work cos ya know graphs).

You could make it temporally safe with the help of locking, yes. But shared didn't contribute towards that in any way, shape, or form.
April 08
On Monday, 8 April 2024 at 19:59:55 UTC, Richard (Rikki) Andrew Cattermole wrote:
>
> On 09/04/2024 7:43 AM, Dukc wrote:
>> On Friday, 5 April 2024 at 10:34:24 UTC, Richard (Rikki) Andrew Cattermole wrote:
>>> ``shared`` does not offer any guarantees about references: how many there are, or which threads they are on. None of it. It is fully up to the programmer to void it, should they choose to do so, in normal ``@safe`` code.
>> 
>> My impression is you can't do that, unless the data structure you're using is flawed (well, `dataStruct.tupleof` works to bypass `@safe`ty, but I don't think it's relevant since it probably needs to be fixed anyway). Pseudocode example?
>
> ```d
> void thread1() {
> 	shared(Type) var = new shared Type;
> 	sendToThread2(var);
>
> 	for(;;) {
> 		// I know about var!
> 	}
> }
>
> void sendToThread2(shared(Type) var) {
> 	thread2(var);
> }
>
> void thread2(shared(Type) var) {
> 	for(;;) {
> 		// I know about var!
> 	}
> }
> ```
>
> The data structure is entirely irrelevant.
>
> Shared provides no guarantees to stop this.
>
> No library features can stop it either, unless you want to check ref counts (which won't work cos ya know graphs).
>
> You could make it temporally safe with the help of locking, yes. But shared didn't contribute towards that in any way, shape, or form.

I don't think this has much to do with shared. You can get into similar situations on a single thread, just not in parallel.

Solving it requires tracking lifetimes, independent of whether that is across threads.

D has very little in the way of an answer here. It has the GC for automatic lifetimes, smart pointers, and then shoot-yourself-in-the-foot manual memory management.

Well, and there is scope with dip1000 of course, which works surprisingly well but requires phrasing things in a more structured manner.
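
For instance, a tiny sketch of the kind of thing dip1000 catches (assuming `-preview=dip1000`, or an edition/compiler that enables it):

```d
// Compile with -preview=dip1000.
@safe:

int* global;

int readThrough(scope int* p)
{
    return *p; // fine: we only read through the scope pointer
}

void tryToEscape(scope int* p)
{
    // global = p; // rejected: scope variable `p` assigned to non-scope `global`
}

void main()
{
    int local = 42;
    // Taking the address of a local is allowed in @safe here because the
    // resulting pointer is treated as scope.
    assert(readThrough(&local) == 42);
}
```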

Curious what you have been cooking.