November 22, 2019
On Wednesday, 20 November 2019 at 06:40:50 UTC, Ola Fosheim Grøstad wrote:
> But maybe it could be configured in a more granular manner than on/off.

Do you have any ideas on that?

One nice thing is that the struct doesn't need to expose an element by ref unless the user needs to pass it as a ref argument to some function.

Instead, the struct can define opIndexAssign and opIndexOpAssign, which don't need to bump the reference count. I probably didn't get that far with my proof of concept RCSlice.
November 22, 2019
On Friday, 22 November 2019 at 15:32:14 UTC, Nick Treleaven wrote:
> On Wednesday, 20 November 2019 at 06:40:50 UTC, Ola Fosheim Grøstad wrote:
>> But maybe it could be configured in a more granular manner than on/off.
>
> Do you have any ideas on that?

There are so many options... but if we set aside "debug-only ref-counting", then this could be a possible "performance philosophy":

I thought about something yesterday regarding bounds checks. I think it would be nice if one could state that a code section should be @high_performance and then get warnings everywhere a safety check had survived all the optimization passes. Then one would have to mark those bounds checks as "necessary" to suppress the warning. That way you can basically write safe code that is performant and focus time and energy where it matters.

If one had ARC optimization like Swift, then maybe something similar could be done for reference counting. In a @high_performance section one would get warnings wherever the refcount is changed and would have to add explicit "borrow statements" to suppress the warnings. Then one could later figure out a way to move those out of performance-sensitive code (like loops).

> One nice thing is that the struct doesn't need to expose an element by ref unless the user needs to pass it as a ref argument to some function.
>
> Instead, the struct can define opIndexAssign and opIndexOpAssign, which don't need to bump the reference count. I probably didn't get that far with my proof of concept RCSlice.

Another interesting thing about having RCSlices is that you might find a concurrency use case where you could put write locks on only certain parts of an array and have safe read/write access to different slices of the same array without error-prone manual locking... not sure exactly how that would work out in terms of performance, but it could at least be useful in debug mode.


November 22, 2019
On Friday, 22 November 2019 at 16:09:51 UTC, Ola Fosheim Grøstad wrote:
> On Friday, 22 November 2019 at 15:32:14 UTC, Nick Treleaven wrote:
>> On Wednesday, 20 November 2019 at 06:40:50 UTC, Ola Fosheim Grøstad wrote:
>>> But maybe it could be configured in a more granular manner than on/off.
>>
>> Do you have any ideas on that?
>
> There are so many options... but if we set aside "debug-only ref-counting", then this could be a possible "performance philosophy":
>
> I thought about something yesterday regarding bounds checks. I think it would be nice if one could state that a code section should be @high_performance and then get warnings everywhere a safety check had survived all the optimization passes. Then one would have to mark those bounds checks as "necessary" to suppress the warning. That way you can basically write safe code that is performant and focus time and energy where it matters.
>
> If one had ARC optimization like Swift, then maybe something similar could be done for reference counting. In a @high_performance section one would get warnings wherever the refcount is changed and would have to add explicit "borrow statements" to suppress the warnings. Then one could later figure out a way to move those out of performance-sensitive code (like loops).
> 
You mean the optimization that gets slammed by state-of-the-art tracing GC implementations?

https://github.com/ixy-languages/ixy-languages

If one cares about performance, there are only two paths: linear and affine type systems, with their complexity for typical everyday coding, or tracing GCs.

Reference counting GCs are just the easy way to get automatic memory management, and when one adds enough machinery to make them compete with tracing GCs in performance, they end up being a tracing GC algorithm in disguise.



November 22, 2019
On Friday, 22 November 2019 at 17:54:53 UTC, Paulo Pinto wrote:
> You mean the optimization that gets slammed by state-of-the-art tracing GC implementations?

You can get the exact same performance from ARC as from Rust, because the concept is the same... but Swift is built on Objective-C's implementation. What is wrong here is to reference a specific implementation.

> If one cares about performance there are only two paths, linear and affine type systems with their complexity for typical every day coding, or tracing GCs.

Actually, no. That "benchmark" didn't really tell me anything interesting. It might be interesting for people wanting to implement network drivers in a managed language, which is worrisome for a large number of reasons... (I mean, why would you implement a tiny program of 1000 lines using a GC...)

> Reference counting GC are just the easy way to get automatic memory management, and when one adds enough machinery to make them compete with tracing GC in performance, they end up being a tracing GC algorithm in disguise.

No, then you end up with C++ unique_ptr...

November 22, 2019
On Friday, 22 November 2019 at 18:35:48 UTC, Ola Fosheim Grøstad wrote:
> No, then you end up with C++ unique_ptr...

The point is, if you constrain the language sufficiently in what can be expressed in it, and do deep enough analysis, then you end up right there.

It isn't really possible to make broad statements about memory management without considering the data structures you need to represent, maintain, evolve...

