June 14, 2022
On Tuesday, 14 June 2022 at 01:55:01 UTC, Mike Parker wrote:
>

the word default seems unnecessary.

how about just:

module foo : @safe, nothrow;
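
for comparison, something close to this can already be written today by putting attribute declarations at the top of the module (just a sketch of existing syntax, not the proposal itself):

module foo;
@safe:
nothrow:

// everything declared from here down is @safe and nothrow by default
int triple(int x) { return 3 * x; }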

June 13, 2022
On 6/8/2022 5:04 PM, Mathias LANG wrote:
> `@safe` by default is only viable if `@safe` has minimum to no friction.
> This isn't the case *at all*. We have examples in the standard library itself.

I've converted a lot of old C code to D. @safe errors crop up regularly, but in nearly every case it is trivial to fix.
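
A hypothetical example of the kind of trivial fix in question: ported C code often walks raw pointers, which @safe rejects, and switching to a bounds-checked slice is usually all it takes.

// Straight from C, rejected in @safe code because of the pointer arithmetic:
// int sum(const int* p, size_t n) { int t = 0; while (n--) t += *p++; return t; }

// The usual trivial fix: take a slice instead.
@safe int sum(const(int)[] a)
{
    int t = 0;
    foreach (x; a)
        t += x;
    return t;
}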

> It's also departing from D's identity as a system programming language which makes it trivial to interop with C / C++, alienating a sizeable portion of our community in the process.

On the other hand, people want their code to be more reliable. Buffer overflows are still the #1 problem in shipped C code.

June 14, 2022
On Tuesday, 14 June 2022 at 01:54:03 UTC, rikki cattermole wrote:
>
> There is no such thing as hardware assisted memory allocation.
>
> Memory allocators were data structures in 1980, and they still are today.
>

if I have a lot of spare memory, I cannot see the advantage of stack allocation over heap allocation.

i.e.:

advantages of stack allocation (over heap allocation):
- no gc pauses
- better memory utilisation

these advantages become less relevant when plenty of spare memory is available, true? In this situation, the only advantage stack allocation would have over heap allocation is that stack allocation is somehow faster than heap allocation.

But what would be the basis (evidence) for such an assertion?

Do we know this assertion to be true?
June 14, 2022
All heap memory allocations are expensive.

I cannot emphasize this enough.

Stack allocation costs one instruction, an add. That's it. Heap allocation cannot compete.
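
A rough sketch of the two cases being compared (an illustration, not the godbolt example linked later in the thread):

int onStack()
{
    int x = 42;  // lives in the function's stack frame; the "allocation" is just
    return x;    // the stack pointer adjustment in the prologue
}

int* onHeap()
{
    // goes through the GC allocator: free-list search, locking, and
    // occasionally a syscall when the pool has to grow
    return new int(42);
}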
June 13, 2022

On 6/13/22 6:49 PM, Walter Bright wrote:

> On 6/10/2022 5:46 AM, Steven Schveighoffer wrote:
>> I'm trying to parse this. You mean to say, there are enough unmarked extern(C) functions inside druntime, that fixing them all as they come up is not scalable? That seems unlikely. Note that modules like core.stdc.math have @trusted: at the top already.
>
> It is not scalable for the reasons mentioned. Nobody has ever gone through windows.d to see what to mark things as, and nobody ever will.

It's done already. Nearly all the modules have @system: at the top. The few I checked that don't are just types, no functions.

>> I will volunteer to mark any druntime extern(C) functions within a 2-day turnaround if they are posted on bugzilla and assigned to me. Start with @system: at the top, and mark them as the errors occur.
>
> They aren't even done now, after 15+ years. See windows.d and all the others.

They are mostly marked @system, with a smattering of @safe and @trusted.

I'll tell you what, I'll do a whole file at a time: winsock32.d ...

OK, I did it in less than 10 minutes.

https://github.com/dlang/druntime/pull/3839

>> If you think the "no-edits" bar is only cleared if extern(C) functions are assumed @safe, you are 100% wrong.
>
> I'm not seeing how I am wrong.

import core.stdc.stdio;
void main() @safe {
   printf("hello world!\n"); // fails: printf is @system, not callable from @safe code
}

You are saying that nobody has any unmarked D code that uses extern(C) functions that are already and correctly marked @system? I'm willing to bet 100% breakage. Not just like 99%, but 100% (as in, a project that has unmarked D code, which calls extern(C) functions, will have at least one compiler error).

Unless... you plan to remark files like core.stdc.stdio as @safe? I hope not.

>> You can point the complaints about extern(C) functions at my ear, and deal with the significant majority of complaints that are about @safe by default D code.
>
> That's a very nice offer, but that won't change that I get the complaints and people want me to fix it, not brush it off on you.

It's trivial:

user: hey, this function in core.sys.windows.windows looks like it should be safe?
Walter: it's probably just that we haven't marked it yet, file a bug and assign it to Steve.
Me: OK, I marked it and all the related functions as @trusted (10 minutes later)

- or -

Me: Sorry, that's not actually safe, please use a trusted escape.

There are 166 files in core/sys/windows. Each one where someone has a problem, I'll fix them in 10 minutes, that's 1660 minutes, or 28 hours of work (spread out over however long, if people find some interface that needs fixing). Less than a man-week. How does this not scale?

You need to learn to delegate! Especially for library functions, you aren't responsible for all of it!
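
As a hypothetical sketch of what such a marking pass looks like (these particular declarations and attribute choices are illustrative, not taken from the linked PR): a pointer-free Win32 call can reasonably be @trusted, while anything taking raw pointers stays @system.

extern (Windows) nothrow @nogc
{
    // No pointers in or out, so calling it cannot break memory safety.
    @trusted uint GetCurrentProcessId();

    // Takes a raw C string; the caller must get it right, so it stays @system.
    @system int lstrlenA(const(char)* lpString);
}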

>> I would love to see a viable safe-by-default DIP get added.
>
> At least we can agree on that!

Please, let's make it happen!

-Steve

June 14, 2022

On Tuesday, 14 June 2022 at 02:35:22 UTC, rikki cattermole wrote:

> All heap memory allocations are expensive.
>
> I cannot emphasize this enough.
>
> Stack allocation costs one instruction, an add. That's it. Heap allocation cannot compete.

I don't even use new!

June 14, 2022
On Tuesday, 14 June 2022 at 02:04:58 UTC, forkit wrote:
> On Tuesday, 14 June 2022 at 01:55:01 UTC, Mike Parker wrote:
>>
>
> the word default seems unnecessary.
>
> how about just:
>
> module foo : @safe, nothrow;

Yeah, that works.
June 14, 2022
On Tuesday, 14 June 2022 at 02:35:22 UTC, rikki cattermole wrote:
> All heap memory allocations are expensive.
>
> I cannot emphasize this enough.
>
> Stack allocation costs one instruction, an add. That's it. Heap allocation cannot compete.

ok. just in terms of 'allocation', for an int, for example:

https://d.godbolt.org/z/Y6E668hEn

it's a difference of 4 instructions (with stack having the lesser amount)

then the question I have is, how much faster is it for *my* cpu to do those 6 instructions vs those 10 instructions.

that seems difficult to accurately determine - as a constant ;-)

So I presume the assertion that stack 'allocation' is faster than heap 'allocation' is based purely on the fact that there are 'a few less' instructions involved in stack allocation.
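
one way to get an actual number for a particular machine is to just time both (a rough sketch using std.datetime.stopwatch; the absolute numbers will depend heavily on the cpu, the GC state, and what the optimiser does to the loop bodies):

import std.datetime.stopwatch : benchmark;
import std.stdio : writeln;

int sink; // module-level so the benchmarked functions need no closure

void stackAlloc() { int x = 42; sink += x; }
void heapAlloc()  { int* p = new int(42); sink += *p; }

void main()
{
    // run each candidate a million times and print the total elapsed time
    auto results = benchmark!(stackAlloc, heapAlloc)(1_000_000);
    writeln("stack: ", results[0]);
    writeln("heap:  ", results[1]);
}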

June 14, 2022
On 14/06/2022 4:24 PM, forkit wrote:
> So I presume the assertion that stack 'allocation' is faster than heap 'allocation', is based purely on the basis that there are 'a few less' instructions involved in stack allocation.

No.

The stack allocation uses a single mov instruction which is as cheap as you can get in terms of instructions.

You are comparing that against a function call which uses linked lists, atomics, locks and syscalls. All of which are pretty darn expensive individually let alone together.

These two things are in very different categories of costs.

One is practically free, the other is measurable.
June 14, 2022
On Tuesday, 14 June 2022 at 04:40:44 UTC, rikki cattermole wrote:
>
> No.
>
> The stack allocation uses a single mov instruction which is as cheap as you can get in terms of instructions.
>
> You are comparing that against a function call which uses linked lists, atomics, locks and syscalls. All of which are pretty darn expensive individually let alone together.
>
> These two things are in very different categories of costs.
>
> One is practically free, the other is measurable.

how can I explain this result in godbolt (using the -O parameter to ldc2):

https://d.godbolt.org/z/hhT8MPesv

if I understand the output correctly (and it's possible I don't), then it's telling me that there is no difference, in terms of the number of instructions needed, between allocating an int on the stack and allocating it on the heap - no difference whatsoever. I don't get it.

Is the outcome peculiar to just this simple example?