November 13, 2021
On Saturday, 13 November 2021 at 01:12:32 UTC, H. S. Teoh wrote:

> That's not what I was talking about, though.  I was talking about programs that load in 1 second or less -- in which case trying to optimize startup code is a waste of time.

It isn't. One second is four billion cycles. Or eight, sixteen, thirty-two... if you're actually running multi-core. 60 full frames of smooth animation. Or gigabytes and gigabytes of data transferred. If you're not up and running in under a dozen milliseconds (i.e. one frame), you're slow. And if you are, you don't need any silly splash screens.

> If you're taking 8 seconds to start up, that *is* a performance problem and you should be optimizing that!

Yeah, tell that to Microsoft ;) To them, 10s is fine. And they're not exactly outliers. Photoshop, Gimp, Krita, Maya, 3ds Max, literally any video editor... Browsers, don't get me started on those.
This very web service takes like 5 full seconds to post a message. There was a time when 5 seconds of wasted time online was A LOT.

> Given that I run my Vim sessions for years at a time, startup time *is* completely irrelevant to me. :-D  If your program is such that people have to constantly restart it, something is bogonous with your design and you *should* consider scrapping it and doing it better.

There may be other reasons for restarting, such as voluntary or mandatory breaks, or security regulations. That something is irrelevant to you does not necessarily mean it's irrelevant to the next guy.
Bottom line is, there's very little software that actually *has* to start slow, and most of what does start slow today isn't that kind.

> But you took what I said completely out of context.

I don't think I have.

> I was talking about optimizing the code where the profiler indicates an actual bottleneck; 90% of your code is not part of a bottleneck and therefore "optimizing" it prematurely is a waste of time, not to mention it makes for needlessly unreadable and unmaintainable code.

You don't optimize prematurely. It's often enough to just not pessimize for no reason. Using a fixed-size array or a static_vector instead of a dynamic one here. Using a table instead of a bunch of math there. Or vice versa, depending. Doing a simple cursory assessment for a possibly better algorithm. Making one syscall instead of two. Something as trivial as aligning your data. Do you really need this ".array" call here, or can you do with a ".copy" into existing storage? Maybe lay out your array differently and let multiple threads at it?..
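For instance, the ".array" vs ".copy" point could be sketched in D like this (function names illustrative, not from the thread):

```d
import std.algorithm : copy, map;
import std.array : array;

int[] squaresAllocating(const int[] input)
{
    // Allocates a fresh array on every call.
    return input.map!(x => x * x).array;
}

void squaresInto(const int[] input, int[] output)
{
    // Reuses caller-provided storage; no allocation.
    // Caller must ensure output.length >= input.length.
    input.map!(x => x * x).copy(output);
}
```

Same result, one fewer GC allocation per call when the caller already owns a buffer.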

> (An 8-second startup time *is* a bottleneck, BTW. Just so we're on the same page.)  If code that isn't part of a bottleneck is allocating and waiting for GC collection, who cares??

??? Everybody that comes after you cares! If Atila can make his thing run in 1µs, but *chose* to leave it at 1ms, because *to him*, anything slower than 20ms doesn't count as slow, he just flat out denied 999µs to someone else. Which can be centuries to that someone. There is no good reason not to do everything you can to stop your code from being slow. To get your data out that much sooner. To let the CPU go that much sooner. Either for other processes, or so that it can go to sleep and save that power.

> It's not the bottleneck, you should be spending your time fixing actual bottlenecks instead.  That's my point.

Of course you should. But that doesn't mean you should be lazy everywhere else. If it doesn't matter to you, be sure it does to someone who runs after you. Or to someone sitting in front of the screen, or at the other end of the wire across the world. Or heck, to yourself: whether you'll have enough juice to make that important phone call on the weekend.

> Just the very act of writing it in D has already given me a major speed boost, I'm not gonna sweat over squeezing microseconds out of it unless I'm also planning to run it in a tight loop 1,000,000 times per second.

Why not? Why wouldn't you? Ain't gettin' paid for that? I mean, it's a reason. Not a very good one, IMO, but a reason. But if not that, then what?

> And seriously, "ultra-optimized" code wastes more time than it

I'm not talking about "ultra-optimized" code.

> saves, because it makes code unreadable and unmaintainable, and the poor fool who inherits your code will be wasting so much more time trying to understand just what the heck your code is trying to do than if you had written it in an obvious way that's slightly slower but doesn't really matter anyway because it's not even part of any bottleneck.

It does matter. Why do you keep insisting that it doesn't? It does. Just because it's not a bottleneck doesn't mean it's fine to just let it be slow. It's wasted cycles. No, you shouldn't necessarily "ultra-optimize" it. But neither should you just sign off on sloppy work.

> All those hours added up is a lot of wasted programmer wages which could have been spent actually adding business value, like y'know implementing features and fixing bugs instead of pulling out your hair trying to understand just what the code is doing.
>
> Do the rest of the world a favor by keeping such unreadable code out of your codebase except where it's actually *necessary*.

No disagreements here.


November 13, 2021

On Saturday, 13 November 2021 at 02:53:12 UTC, zjh wrote:

>

On Saturday, 13 November 2021 at 01:12:32 UTC, H. S. Teoh wrote:

>

Everybody's gone.

Although D's metaprogramming is beautiful, with a GC I'd rather use a macro language.
Why do the new languages have no GC? Doesn't everyone see that D is bitten by the GC?

November 13, 2021

On Saturday, 13 November 2021 at 03:02:30 UTC, zjh wrote:

> Why do the new languages have no GC? Doesn't everyone see that D is bitten by the GC?

Which language do you compete with when GC is holding you back?

November 13, 2021
On 13.11.21 01:29, Paul Backus wrote:
> On Friday, 12 November 2021 at 22:09:28 UTC, Andrei Alexandrescu wrote:
>> On 2021-11-12 16:14, Timon Gehr wrote:
>>> You still need language support. Reaching mutable data through an immutable pointer violates transitivity assumptions.
>>
>> Indeed, but not indirectly. You can soundly access a pointer to mutable data if you use the immutable pointer as a key in a global hashtable.
> 
> I guess the question reduces to: what is the simplest thing you can do to a pointer that will make the compiler forget its provenance?
> 
> Adding an offset is clearly not enough. Casting it to an integer, taking its hash, and using that hash to look up a value in an associative array is probably *more* than enough. Is there something in between that's "just right"?
> 
> One possibility is casting the pointer to an integer, and then immediately casting it back:
> 
>      immutable(int)* p;
>      size_t n = cast(size_t) (p - 1);
>      int* q = cast(int*) n;
>      *q = 123;
> 
> Assume we know, somehow, that the memory location pointed by q is allocated and contains a mutable int. Does the mutation in the final line have undefined behavior?
> 
> The answer depends on whether integers have provenance or not--a question which remains unsettled in the world of C and C++. [1] If we decide that integers should *not* have provenance in D, then the above code is valid, and we do not need to add any new features to the type system to support it.
> ...

It may also depend on whether you are in a pure function or not.
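A minimal sketch of why purity enters the picture (function hypothetical): a pure function taking an immutable pointer may assume the pointee cannot change across the call, so a mutation through a laundered copy of that pointer would invalidate compiler reasoning:

```d
pure int readTwice(immutable(int)* p)
{
    int a = *p;
    // Inside a pure function the compiler may assume *p is stable
    // and fold this second load into 2 * a. If other code mutated
    // the pointee through an integer-laundered pointer, that
    // folding would silently produce the wrong answer.
    int b = *p;
    return a + b;
}
```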
November 13, 2021
On 12.11.21 23:09, Andrei Alexandrescu wrote:
> On 2021-11-12 16:14, Timon Gehr wrote:
>> On 12.11.21 18:44, H. S. Teoh wrote:
>>> Yes.  So actually, this*could*  be made to work if the RC payload is
>>> only allowed to be allocated from an RC allocator.  The allocator would
>>> allocate n+8 bytes, for example, and return a pointer to offset 8 of the
>>> allocated block, which is cast to whatever type/qualifier(s) needed.
>>> Offset 0 would be the reference count.  The copy ctor would access the
>>> reference count as *(ptr-8), which is technically outside the
>>> const/immutable/whatever payload.  When the ref count reaches 0, the
>>> allocator knows to deallocate the block starting from (ptr-8), which is
>>> the start of the actual allocation.
>>
>> You still need language support. Reaching mutable data through an immutable pointer violates transitivity assumptions.
> 
> Indeed, but not indirectly. You can soundly access a pointer to mutable data if you use the immutable pointer as a key in a global hashtable.

Actually, you can't.

On 08.11.21 22:42, Andrei Alexandrescu wrote:
> - work in pure code just like T[] does
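The quoted allocation scheme (count at offset 0, payload at offset 8) might look roughly like this in D; a sketch only, with illustrative names and no alignment or thread-safety handling:

```d
import core.stdc.stdlib : free, malloc;

// The reference count lives in the 8 bytes just before the payload.
ubyte* rcAllocate(size_t n)
{
    auto block = cast(ubyte*) malloc(n + 8);
    *cast(size_t*) block = 1;   // offset 0: the reference count
    return block + 8;           // caller only ever sees the payload
}

void rcAddRef(ubyte* payload)
{
    ++*cast(size_t*)(payload - 8);
}

void rcRelease(ubyte* payload)
{
    auto count = cast(size_t*)(payload - 8);
    if (--*count == 0)
        free(count);            // free from the start of the real block
}
```

The count mutation happens outside the const/immutable payload, which is precisely the part that still needs language support once the payload pointer is qualified.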

November 13, 2021
On 11/12/21 10:15 PM, Timon Gehr wrote:
> On 12.11.21 23:09, Andrei Alexandrescu wrote:
>> On 2021-11-12 16:14, Timon Gehr wrote:
>>> On 12.11.21 18:44, H. S. Teoh wrote:
>>>> Yes.  So actually, this*could*  be made to work if the RC payload is
>>>> only allowed to be allocated from an RC allocator.  The allocator would
>>>> allocate n+8 bytes, for example, and return a pointer to offset 8 of the
>>>> allocated block, which is cast to whatever type/qualifier(s) needed.
>>>> Offset 0 would be the reference count.  The copy ctor would access the
>>>> reference count as *(ptr-8), which is technically outside the
>>>> const/immutable/whatever payload.  When the ref count reaches 0, the
>>>> allocator knows to deallocate the block starting from (ptr-8), which is
>>>> the start of the actual allocation.
>>>
>>> You still need language support. Reaching mutable data through an immutable pointer violates transitivity assumptions.
>>
>> Indeed, but not indirectly. You can soundly access a pointer to mutable data if you use the immutable pointer as a key in a global hashtable.
> 
> Actually, you can't.
> 
> On 08.11.21 22:42, Andrei Alexandrescu wrote:
>> - work in pure code just like T[] does

Indeed if you add the purity requirement you can't, thanks.
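To make the failure concrete, here is a sketch (names hypothetical) of the external-hashtable scheme and the purity error it runs into:

```d
// Ref counts held outside the immutable payload, keyed by address.
__gshared size_t[immutable(void)*] refCounts;

void addRef(immutable(int)* p)
{
    ++refCounts[cast(immutable(void)*) p]; // fine in impure code
}

pure void addRefPure(immutable(int)* p)
{
    // Does not compile: a pure function cannot read or write
    // the mutable global 'refCounts'.
    // ++refCounts[cast(immutable(void)*) p];
}
```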

November 19, 2021
On Monday, 8 November 2021 at 22:15:09 UTC, deadalnix wrote:
> On Monday, 8 November 2021 at 21:42:12 UTC, Andrei Alexandrescu wrote:
>> 1. We could not make reference counting work properly in pure functions (a pure function does not mutate data, which goes against manipulating the reference count).
>>
>
> I don't think this one is a real problem, as one can cast a function call to pure and go through this. Dirty as hell, but done properly it should work even in the face of compiler optimization based on purity. GCC or LLVM would have no problem optimizing this scaffolding away.
>
>> 2. Qualifiers compound problems with interlocking: mutable data is known to be single-threaded, so no need for interlocking. Immutable data may be multi-threaded, meaning reference counting needs atomic operations. Const data has unknown origin, which means the information of how data originated (mutable or not) must be saved at runtime.
>>
>
> shared_ptr does atomic operation all the time. The reality is that on modern machines, atomic operation are cheap *granted there is no contention*. It will certainly limit what the optimizer can do, but all in all, it's almost certainly better than keeping the info around and doing a runtime check.
>
>> 3. Safety forced the use of trusted blocks all over. Not a showstopper, but a complicating factor.
>>
>> So if there are any takers on getting RCSlice off the ground, it would be very interesting.
>
> Yes, similar to pure, a bit annoying, but workable.
>
> I think however, you missed several massive problems:
> 4. Type qualifier transitivity. const RCSlice!T -> const RCSlice!(const T) conversion needs to happen transparently.
> 5. Head mutability. const RCSlice!(const T) -> RCSlice!(const


It's a copy of the head: ".hidup" or hdup, head-only duplication.


> T) conversion needs to happen transparently.
> 6. Safety is impossible to ensure without a runtime check at every step, because it is not possible to enforce construction in D at the moment, so destructors and copy constructors/postblits always need to do a runtime check for .init.
>
> I believe 4 and 5 to be impossible in D right now, 6 can be solved using a ton of runtime checks.
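The "cast it to pure" workaround mentioned at the top of the quoted post is a known std.traits idiom; roughly:

```d
import std.traits : FunctionAttribute, SetFunctionAttributes,
    functionAttributes, functionLinkage, isDelegate, isFunctionPointer;

// Reinterpret a function pointer or delegate as pure. Only sound
// if the target really behaves purely with respect to the caller.
auto assumePure(T)(T t)
if (isFunctionPointer!T || isDelegate!T)
{
    enum attrs = functionAttributes!T | FunctionAttribute.pure_;
    return cast(SetFunctionAttributes!(T, functionLinkage!T, attrs)) t;
}

// Usage inside a pure function (names hypothetical):
//     auto bump = &incrementRefCount;
//     assumePure(bump)(payload);
```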


1) mutable <--> shared immutable
2) inout containers
3) locked unlocked

1---

2---
Containers do not need to modify the possibly mutable elements.
Same code for mutable and immutable.

container!inout


container!Mutable and container!immutable could have different code and a different memory layout; any conversion needs to rearrange that.

3---
//typeof(obj) == "locked TypeName"

synchronize(obj)
{
  //typeof(obj) == "TypeName"
}

//typeof(obj) == "locked TypeName"

How to keep shared and unshared separate but still reuse the same code?
---

December 14, 2021
On Monday, 8 November 2021 at 21:42:12 UTC, Andrei Alexandrescu wrote:

> So if there are any takers on getting RCSlice off the ground, it would be very interesting.

Well, let's explore what we break, and try to figure out how we can improve the language so that we don't have to break it.
Anyone interested, please submit your comments, fixes, tests, etc.

https://github.com/radcapricorn/dlang-rcslice
December 14, 2021
On Friday, 12 November 2021 at 21:14:28 UTC, Timon Gehr wrote:
> You still need language support. Reaching mutable data through an immutable pointer violates transitivity assumptions.

D does not have well-defined provenance semantics.  We need to decide:

1) what provenance semantics do we want;

2) how do we implement those provenance semantics; and

3) how do we implement rc given those semantics

Ideally 1 should be decided with an eye towards 2 and 3.  I think it would be reasonable to define provenance semantics which permit you to derive a mutable pointer from an immutable one (though not in @safe code).  I think it would also be reasonable to define provenance semantics under which you cannot perform such a derivation.

But that's something we have to decide, not take for granted.
December 14, 2021
On Tuesday, 14 December 2021 at 06:27:27 UTC, Elronnd wrote:
> I think it would be reasonable to define provenance semantics which permit you to derive a mutable pointer from an immutable one

Simple example:

struct S { immutable float x; int y; }
S a;
immutable(float)* x = &a.x;
int y = *cast(int*)(cast(ubyte*)x + (S.y.offsetof - S.x.offsetof));

above should be UB if (cast(ubyte*)x + (S.y.offsetof - S.x.offsetof)) does not actually point to mutable memory.  This is subtly different from:

int x;
const(int)* p = &x;
*cast(int*)p = 5;

but still similar enough to warrant concern; which is why I think it would also be reasonable to make the former form illegal.