November 17
On Wednesday, 17 November 2021 at 07:54:19 UTC, forkit wrote:
> On Wednesday, 17 November 2021 at 06:50:52 UTC, Paulo Pinto wrote:
>>
>> Google and Linux kernel don't care about your opinion and have started introducing Rust into the Linux kernel.
>>
>> AUTOSAR for high-integrity computing doesn't care about your opinion and now mandates C++14 as the language to use in AUTOSAR-certified software (ISO 26262, road vehicle functional safety).
>>
>> Arduino folks don't care about your opinion and Arduino libraries are written in C++, they also have Rust and Ada collaborations.
>>
>> C was close to PDP-11 assembly; it is hardly close to any modern CPU.
>
> Well, clearly those examples demonstrate that my opinion has some merit ;-)
>
> Also, many of C's so-called problems are really library problems. You can't even do I/O in C without a library.
>
> Also, you kinda left out a lot... like all the problem domains where C is still the language of choice, even to this day.
>
> I mean, even Go was originally written in C. It seems unlikely they could have written Go in Go.

Go was written in C because the authors decided to reuse the Plan 9 compilers they had created in the first place, instead of starting from scratch. That is all; nothing special about C other than saving time.

Currently, Go is written in Go, including its GC implementation.

F-Secure has its own bare-metal Go implementation, TamaGo, written in Go and sold for high-security firmware.
November 17
On Wednesday, 17 November 2021 at 01:23:45 UTC, H. S. Teoh wrote:
> Years ago, before @nogc was implemented, people were clamoring for it, saying that if we could only have the compiler enforce no GC use, hordes of C/C++ programmers would come flocking to us.
>
> Today, we have @nogc implemented and working, and the hordes haven't come yet.


@nogc gave those who understand systems-level programming a signal of direction. I don't remember people demanding it; IIRC Walter just did it. Nobody said it was significant.
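For context, a minimal sketch of what the attribute actually enforces: inside a `@nogc` function the compiler statically rejects any operation that could allocate from the GC (the function and variable names below are made up for illustration).

```d
import core.stdc.stdio : printf;

// @nogc is transitive: everything this function calls must also be @nogc.
@nogc nothrow int sumSquares(const int[] a)
{
    int s = 0;
    foreach (x; a)
        s += x * x;
    // int[] tmp = new int[4]; // compile error: 'new' allocates from the GC
    return s;
}

void main() @nogc nothrow
{
    int[3] data = [1, 2, 3];    // static array: no heap allocation
    printf("%d\n", sumSquares(data[]));
}
```

Uncommenting the `new` line turns the silent convention into a compile error, which is the whole point of the signal.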

> Now people are clamoring for ref-counting and getting rid of GC use in Phobos.  My prediction is that 10 years later, we will finally have ref-counting and Phobos will be @nogc, and the hordes of C/C++ programmers still will not come to us.

C++ has grown since then. D has chosen to waste all resources on @safe and @live and not change. Thus C++ is too far ahead.

D should have gone with the actor model. D should have gotten rid of global GC scanning.

But D has not chosen any model and tries to do everything, which isn't possible. D tries to change without changing. That leads to bloat.

Phobos suffers from bloat. The compiler suffers from bloat. The syntax suffers from bloat. Andrei seems to think that D should follow C++'s idea of simplifying by addition. That is a disaster in the making. C++ cannot change; D is not willing to use that to its advantage.

Bloat is the enemy of change. D has chosen to tweak the bloat instead of reducing it. That leads to more bloat and less change. ImportC is added to a compiler that should have been restructured first. Good luck refactoring the compiler now; SDC might be the only hope...

The global GC strategy with raw pointers and integrated C interop is one massive source of "runtime bloat". C++ also suffers from bloat, but it has critical mass, and that is enough to keep it alive.

D is competing with Zig and Nim. They are leading. They have less bloat AFAIK. Refocus. Forget C++ and Rust.

D should pick a memory model and runtime strategy that scales, and do it well!

Global GC does not scale when combined with C semantics. That has always been true and will remain true.

If D is to be competitive, something has to change. Adding bloat won't change anything.



November 17

On Wednesday, 17 November 2021 at 02:10:02 UTC, jmh530 wrote:

> I'm confused by this because it seems as if the managed C++ iterations from Microsoft do not have much traction. What is the benefit of different types for GC/non-GC pointers?

Managed C++ is now named C++/CLI and it is probably still there if you want to use it. Not many use C++/CLI and I suspect that people simply use C# instead as it is a much better alternative for most cases.

The benefit of a special type for managed pointers is that you can change the implementation of the GC fairly easily, as well as incorporate metadata under the hood. Tracing GC is not suitable for low-latency/embedded programs, but reference counting can be a viable alternative for low-latency programs.
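As a hedged illustration of the "metadata under the hood" point (this is not how D's GC works today, just a sketch): a distinct managed-pointer type can keep its bookkeeping in a hidden header in front of the payload, so the runtime strategy can change without touching user code. A minimal reference-counted version, with hypothetical names:

```d
import core.stdc.stdlib : malloc, free;

// Hypothetical managed-pointer type: the metadata (here, a refcount) lives
// in a header allocated just before the payload, invisible to the user.
struct Rc(T)
{
    private static struct Box { size_t count; T value; }
    private Box* box;

    static Rc make(T value) @nogc nothrow
    {
        auto b = cast(Box*) malloc(Box.sizeof);
        b.count = 1;
        b.value = value;
        return Rc(b);
    }

    this(this) @nogc nothrow { if (box) ++box.count; }   // copy bumps the count

    ~this() @nogc nothrow
    {
        if (box && --box.count == 0)
            free(box);                                   // last owner frees
    }

    ref T get() @nogc nothrow { return box.value; }
    size_t refCount() const @nogc nothrow { return box ? box.count : 0; }
}

void main()
{
    auto a = Rc!int.make(42);
    {
        auto b = a;               // postblit: count goes to 2
        assert(a.refCount == 2);
    }                             // b's destructor drops it back to 1
    assert(a.refCount == 1 && a.get == 42);
}
```

Swapping the header layout (e.g. adding a tracing mark bit) would not change any code that only sees `Rc!T`, which is the claimed benefit of a dedicated pointer type.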

November 17

On Wednesday, 17 November 2021 at 07:50:20 UTC, Tejas wrote:

> • ARC seems to be a pipe dream for some reason;

It is a pipe dream because there is no plan to restructure the compiler internals.

Until that happens, a solid ARC implementation is infeasible (or rather, much more expensive than the restructuring costs).

What is missing is basic software engineering: the boring aspect of programming, but basically a necessity if you care about cost and quality.

November 17

On Wednesday, 17 November 2021 at 02:32:21 UTC, H. S. Teoh wrote:

> With a GC, you instantly eliminate 90% of these problems. Only 10% of the time, you actually need to manually manage memory -- in inner loops and hot paths where it actually matters.
>
> GC phobia is completely irrational and I don't see why we should bend over backwards to please that crowd.
>
> T

I'll tell you a story :)

I came from C#, so I'm not GC-phobic at all. It's a different mindset compared to hardcore C/C++ devs (just get the shit done using one of the many libraries out there).

What I liked (and still like) about D is that it allowed me to become more low-level and more performant, but still be very productive. D code is also often much shorter and easier to understand (Rust makes my eyes bleed).

The GC allows that and is great for it. And I must admit that D has broken me in a way that I don't want to use higher-level languages anymore. I've learned a lot using D through the years.

BUT:

  • have you tried to write a shared D library used from some other language from multiple threads? I know that you must register/unregister threads in the GC, but it just hasn't worked for me reliably, and you have to track the lifetime of the thread in the calling language - not a pleasant experience at all, and no tutorials on how to do it properly that actually work - it's some years-old experience now, so maybe something has changed
  • as the GC is the stop-the-world kind, the only way to keep it from interfering while still using it elsewhere is to make a thread (with a @nogc function) that is not registered in the GC and build your own mechanism to exchange data between GC and @nogc threads (as std.concurrency won't help you here)
  • the GC won't magically stop the leaks. Nowadays one wants a long-running service that just works. But try that with the 'flagship' vibe-d framework and you'll probably get this experience:
    • I don't much like it when GC.stats reports something like 239 MB free out of 251 MB used memory - that is a lot of unused space in a microservice world (and we had cases when the OS just OOM-killed the service as it grew indefinitely, regardless of the majority of free space in the GC, as GC.stats reports)
    • and now try to figure that out -> after that experience I would rather use ASan than a GC with no tools to help diagnose it
    • we have somehow managed to set GC properties in a way that it doesn't grow that much, and to get rid of a lot of small allocations, but at a cost you wouldn't expect when using a GC
      • one case that caused a lot of unnecessary small allocations was something like row["count"].as!long when reading columns from a database. Seems like a totally normal way, right? But there is much more to it. Because it (the dpq2 library) uses libpq internally, which addresses columns by index, it calls the C function that takes a char* column name to look it up, and so uses toStringz, which allocates - for every single use of that column, for every single row being read. You can imagine where that goes when handling complex queries on thousands of rows. And that is not something a programmer like the 'original me' wants to care about; he just wants to use the available libraries and get the work done - that is what a GC should help with, right?
    • there are leaks caused by druntime bugs themselves (for example https://issues.dlang.org/show_bug.cgi?id=20680)
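The dpq2 allocation pattern above can often be fixed at the call site by hoisting the conversion out of the per-row loop, so `toStringz` allocates once instead of once per row. A hypothetical sketch (`fakeLookup` and `readAll` are made-up stand-ins, not the real libpq or dpq2 API):

```d
import std.string : toStringz;
import core.stdc.string : strlen;

// Stand-in for a C function that takes a zero-terminated column name.
size_t fakeLookup(const(char)* name) { return strlen(name); }

size_t readAll(string[] rows, string colName)
{
    const(char)* cName = colName.toStringz; // one allocation, reused per row
    size_t total;
    foreach (row; rows)
        total += fakeLookup(cName);         // no per-row toStringz call
    return total;
}

void main()
{
    assert(readAll(["r1", "r2", "r3"], "count") == 15); // 3 rows * strlen("count")
}
```

Of course, the complaint stands: a library user shouldn't have to know this; the caching belongs inside the library.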

After those and some other experiences with the GC I just became a bit GC-phobic (I'm OK with the GC for CLI tools, scripts, and short-running programs; no problem there) and try to avoid it as much as I can. But when you want to get shit done you can't write everything on your own; you use the libraries that get you there without much hassle.
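For the shared-library registration issue in the first bullet, the druntime entry points do exist (`thread_attachThis`/`thread_detachThis` in `core.thread`), though as described above they can be unreliable in practice. A hedged, POSIX-only sketch of the intended usage, assuming the D runtime itself is already initialized (`libEntry` is a hypothetical exported function):

```d
import core.thread : thread_attachThis, thread_detachThis;
import core.sys.posix.pthread : pthread_create, pthread_join, pthread_t;

// Hypothetical entry point as seen from a foreign (non-druntime) thread:
// attach before touching the GC, detach before returning, or the GC may
// miss this thread's stack (or scan a stale one).
extern (C) void* libEntry(void* arg)
{
    thread_attachThis();              // register this thread with druntime's GC
    scope (exit) thread_detachThis(); // unregister on every exit path
    auto buf = new int[](64);         // GC allocation is now safe here
    *(cast(size_t*) arg) = buf.length;
    return null;
}

void main()
{
    // Simulate a foreign caller with a raw pthread druntime knows nothing about.
    size_t result;
    pthread_t t;
    pthread_create(&t, null, &libEntry, &result);
    pthread_join(t, null);
    assert(result == 64);
}
```

The hard part the bullet describes remains: the *calling* language has to guarantee the detach happens before the thread dies, which is exactly where this gets fragile.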

Overall, my 2 cents on the state of D:

  • druntime relies too much on the GC
    • no Fiber usable in @betterC or @nogc
    • no Thread usable in @betterC or @nogc
    • etc.
    • I just think that the basic blocks we build on should be as low-level as possible to be generally usable
  • druntime and Phobos have many extern(C) or normal functions that aren't @nogc although they could be (it's getting better with each release thanks to various contributors who care enough to at least report it) - but look at the codebases of mecca or vibe-d, where they use their own extern(C) redefinitions because of this; mecca even has an assumeNoGC template to work around missing @nogc attributes
  • std.experimental.allocator
    • still in experimental
    • not usable in @betterC
    • shouldn't the standard library interface generally use the allocators, so that the caller can actually choose the way it allocates?
  • preview switches that look like they will stay in preview forever (i.e. fieldwise)?
  • no async/await - it's hard to find a modern language without it, yet D is one of them, and there doesn't seem to be any interest in it from the leadership (it would deserve a workgroup)
    • but I'm afraid that even if it were added to the language, it would still use the GC, as GC is great..
  • poor tooling compared to others - I'm using VSCode on Linux (sorry, I'm too lazy to learn vim well enough to be effective with it); it somewhat works, but code completion breaks too often for me (I'm used to it over the years, but I can imagine it doesn't look good to newcomers)
  • dub and code.dlang.org don't seem to be very official, or much cared about
  • it's hard to tell anyone that the GC in D is fine when you look at the TechEmpower benchmark and search for vibe-d (or anything in D) up where it should be, and it isn't (even other GC languages rank much higher there)
  • betterC seems to be becoming an unwanted child and is a minefield to use - see the bugs
  • I think there are two sorts of groups in the D community - a more low-level one that doesn't like to use the GC much, and the GC 'likers', for whom the GC is just 'good enough'
    • I'm afraid there can't be a consensus on what D should look like, as those groups have different priorities and points of view
    • their preferred memory models differ too, and I'm not sure it's possible in D to make both sides happy (and without breaking changes)
  • most libraries on code.dlang.org are high-level, and mostly, when you want to use betterC or avoid the GC, you are on your own. That is a problem when you just want to use some component and be done (if there is no C alternative, or it would mean writing a more idiomatic wrapper for it).
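The mecca workaround mentioned in the list above is essentially a cast that adds the missing attribute. A sketch of that assumeNoGC pattern (the same technique is documented on the D wiki); note it is a promise to the compiler, so the callee really must not allocate:

```d
import std.traits : functionAttributes, FunctionAttribute, functionLinkage,
    SetFunctionAttributes, isFunctionPointer, isDelegate;

// Cast a function pointer or delegate so the type system treats it as @nogc.
// Unsafe by construction: if the callee does allocate, you lied to the compiler.
auto assumeNoGC(T)(T t) @nogc nothrow
    if (isFunctionPointer!T || isDelegate!T)
{
    enum attrs = functionAttributes!T | FunctionAttribute.nogc;
    return cast(SetFunctionAttributes!(T, functionLinkage!T, attrs)) t;
}

__gshared int calls;

void notMarked() nothrow { ++calls; } // allocation-free, but lacks @nogc

void main() @nogc nothrow
{
    assumeNoGC(&notMarked)(); // now callable from a @nogc context
    assert(calls == 1);
}
```

This is exactly the kind of boilerplate that properly attributed druntime/Phobos declarations would make unnecessary.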
November 17

On Wednesday, 17 November 2021 at 10:59:19 UTC, IGotD- wrote:

> On Wednesday, 17 November 2021 at 02:10:02 UTC, jmh530 wrote:
>
>> I'm confused by this because it seems as if the managed C++ iterations from Microsoft do not have much traction. What is the benefit of different types for GC/non-GC pointers?
>
> Managed C++ is now named C++/CLI and it is probably still there if you want to use it. Not many use C++/CLI and I suspect that people simply use C# instead as it is a much better alternative for most cases.

It is mostly used to consume those COM APIs that the Windows team keeps producing only for C++ consumption and that are harder to get working with plain P/Invoke or RCW/CCW.

> The benefit of a special type for managed pointers is that you can change the implementation of the GC fairly easily as well as incorporate metadata under the hood. Tracing GC is not suitable for low latency programs/embedded, but reference counting can be a viable alternative for the low latency programs.

PTC and Aicas have been in business for the last 25 years doing real-time GC for embedded.

It is a matter of who's on the team:

"Hard Realtime Garbage Collection in Modern Object Oriented Programming Languages."

https://www.amazon.com/Realtime-Collection-Oriented-Programming-Languages/dp/3831138931/

Basically the foundational background for the Aicas product (the thesis was written by one of the founders):

"Distributed, Embedded and Real-time Java Systems"

https://link.springer.com/book/10.1007/978-1-4419-8158-5

Given that D is still in a philosophical search over whether it wants to double down on the GC or not, such optimizations aren't possible.

November 17
On Wednesday, 17 November 2021 at 12:14:46 UTC, tchaloupka wrote:
> * it's hard to tell anyone that the GC in D is fine when you look at the TechEmpower benchmark and search for vibe-d (or anything in D) up where it should be, and it isn't (even other GC languages rank much higher there)

I'm pretty sure you're the one who benchmarked my cgi.d and it annihilated vibe.d in that test.

Maybe it isn't the GC and vibe is just poorly written?

November 17

On Wednesday, 17 November 2021 at 13:44:39 UTC, Adam D Ruppe wrote:

> Maybe it isn't the GC and vibe is just poorly written?

Make the required language changes and make the GC fully precise. In the cloud you care more about memory usage than computational speed; your whole instance might boot on 256-512 MiB. GC is OK, but you need to be able to reclaim all memory.
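Today's druntime does expose knobs for this, though how much memory actually goes back to the OS is implementation-dependent, which is part of the complaint. A small sketch of the relevant `core.memory.GC` calls:

```d
import core.memory : GC;
import std.stdio : writefln;

void main()
{
    auto big = new ubyte[](32 * 1024 * 1024); // grow the GC heap
    big = null;                               // drop the only reference
    GC.collect();                             // reclaim the block
    GC.minimize();                            // ask druntime to return free pools to the OS
    auto s = GC.stats;                        // used vs. free bytes still held by the GC
    writefln("used: %s bytes, free: %s bytes", s.usedSize, s.freeSize);
}
```

Whether `GC.minimize` actually releases pages depends on the platform implementation, as the post further down about the druntime `os.d` source wryly notes.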

November 17

On Wednesday, 17 November 2021 at 10:59:19 UTC, IGotD- wrote:

> [snip]
> The benefit of a special type for managed pointers is that you can change the implementation of the GC fairly easily as well as incorporate metadata under the hood. Tracing GC is not suitable for low latency programs/embedded, but reference counting can be a viable alternative for the low latency programs.

Thanks. I now remember that this might have come up before.

I get the idea that tracing GC and reference counting are useful in different programs. However, I understand that it is possible to switch out D's GC, though that may not be so easy. Could you explain a bit more how having two different pointer types helps with switching out the GC?

Also, suppose std.allocator gets put in Phobos. We can currently use the gc_allocator, would it be possible to also create an rc_allocator? Is the issue that the pointer of gc_allocator is a normal pointer, but rc_allocator would need one wrapped with additional metadata?
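For reference, today's std.experimental.allocator already lets you swap backends behind the same make/dispose interface; what it hands back is still a raw `T*`, which hints at why an rc_allocator is awkward: the refcount metadata has nowhere standard to live unless the pointer type itself carries it. A sketch of the existing interface (`Point` is a made-up example type):

```d
import std.experimental.allocator : make, dispose;
import std.experimental.allocator.mallocator : Mallocator;
import std.experimental.allocator.gc_allocator : GCAllocator;

struct Point { double x, y; }

void main()
{
    // Same interface, different backing strategy; both return a plain Point*.
    Point* a = Mallocator.instance.make!Point(1.0, 2.0);
    scope (exit) Mallocator.instance.dispose(a); // manual: Mallocator won't free for you

    Point* b = GCAllocator.instance.make!Point(3.0, 4.0);
    // no dispose needed for b: the GC owns it

    assert(a.x == 1.0 && b.y == 4.0);
}
```

A hypothetical rc_allocator would need either a fat pointer type or a hidden header, which is exactly the "different pointer types" question.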

November 17
On Wednesday, 17 November 2021 at 13:50:59 UTC, Ola Fosheim Grøstad wrote:
> Your whole instance might boot on 256-512MiB.

I've had no trouble running normal D on that right now.

Though one thing that would be nice is this function:

https://github.com/dlang/druntime/blob/master/src/core/internal/gc/os.d#L218

Notice the Linux implementation... lol.