October 05, 2021
On Tuesday, 5 October 2021 at 14:31:15 UTC, David Gileadi wrote:
> On 10/5/21 6:17 AM, Dukc wrote:
>> On Monday, 4 October 2021 at 20:42:06 UTC, David Gileadi wrote:
>>>
>>> This message was a reference type containing protected values.
>> 
>> Huh, what did you try to do? I tested the snippet before posting and for me, it worked.
>
> Sorry, it was my poor attempt at wordplay. Please ignore.

I got the joke anyway 😅
October 05, 2021
On Tue, Oct 05, 2021 at 08:22:48AM +0000, Max Samukha via Digitalmars-d wrote:
> On Monday, 4 October 2021 at 22:15:33 UTC, Walter Bright wrote:
> 
> > This is plasticity, the opposite of brittleness.
> 
> To me, that's just another case of abstraction. D's '.' abstracts the details of '.' and '->'. That naturally leads to plasticity - ability to swap concrete things without affecting the abstract interface.

It's more than just D's '.' vs C's '.' and '->', though. That's just one of the smaller details that lead to plasticity.

Other factors include type inference: by using type inference where the code doesn't really depend on a specific concrete type, you allow that type to be swapped for another one later on without needing to change that part of the code.  This is particularly powerful in UFCS chains: if you had to implement the UFCS chain in C, for example, you'd have to rewrite a whole bunch of types, variable declarations, etc., every time you need to do something like insert a new component into the chain, or reorder the chain.  That makes refactoring the equivalent code in C an onerous task, which naturally incentivizes people *not* to do such a refactoring in C.  In D, thanks to type inference, it takes just a few seconds to perform this refactoring.  That opens the door to refactoring your program in ways you normally don't do in C, and much more frequently too.
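To make that concrete, here's a minimal sketch (the pipeline itself is invented, not taken from any real project). Inserting the filter stage below touches nothing outside the chain, because every intermediate type is inferred:

import std.algorithm : filter, map, sum;
import std.range : iota;

void main()
{
    // Original pipeline: square the numbers and add them up.
    auto total = iota(1, 100)
        .map!(x => x * x)
        .sum;

    // Inserting a new stage changes only the chain itself. The concrete
    // range type changes completely, but no variable or return types
    // have to be rewritten, thanks to type inference.
    auto evenTotal = iota(1, 100)
        .filter!(x => x % 2 == 0)
        .map!(x => x * x)
        .sum;
}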

Another factor is built-in unittests: if your code has an adequate set of unittests, you're far more likely to do larger-scale refactorings, because the unittests give you confidence that any glaring mistakes would be instantly caught.  In C, unless you have a solid unittesting framework in place (how many C projects have you seen that have this? IME, they're in the far minority), you'd have no confidence at all that your refactoring wouldn't introduce new bugs, especially subtle bugs that will come back to bite you in the worst possible ways.  This factor doesn't stand by itself; D code tends to be more testable thanks to incentives to write things like UFCS chains instead of deeply-nested loops.  As a result, if properly used, unittests tend to be more thorough than the external testsuites common in C projects that have one (unfortunately, most C projects don't even have that).  Which in turn leads to a higher level of confidence that you won't introduce bugs during refactoring.
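For reference, a minimal sketch of the built-in mechanism (the function here is made up):

/// Returns true if the slice is sorted in ascending order.
bool isAscending(const int[] a)
{
    foreach (i; 1 .. a.length)
        if (a[i - 1] > a[i])
            return false;
    return true;
}

// unittest blocks sit right next to the code they test and run when the
// program is built with -unittest, so a refactoring that breaks behaviour
// is caught immediately.
unittest
{
    assert(isAscending([1, 2, 3]));
    assert(!isAscending([3, 1, 2]));
    assert(isAscending([]));   // an empty slice is trivially sorted
}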

Template functions with DbI also add to D's plasticity: by using static ifs to discover properties of incoming types and adapt accordingly, a function can retain the same external API while growing in functionality.  As I described in my other post with the example of a serialization function, DbI allows the caller to remain unchanged in the face of changing types and changing requirements, like excluding certain fields from serialization or using different serialization strategies for different types.  In the equivalent C code, you'd have to write a different serialization function per type, or use error-prone APIs that pass in void* and type sizes (not to mention nesting information for nested types; the complexity just explodes).  And every time you switch the type being serialized, you'd have to change every call site that passes that type.  To handle things like non-serialized fields, you'd have to hard-code stuff into the serialization functions or pass unwieldy structures like lists or hashtables of fields to exclude, fields that need special treatment, etc.  It's a lot of tedious (and error-prone!) busywork just to do a refactoring that in D constitutes just a few lines of code change.
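As a hedged sketch of what such a DbI-style serializer could look like (the UDA and the struct are hypothetical, not the code from that post):

import std.traits : hasUDA;

enum NoSerialize;   // hypothetical UDA marking fields to skip

string serialize(T)(T value)
{
    import std.conv : to;

    static if (is(T == struct))
    {
        // Discover the fields of the incoming type and adapt; the external
        // API stays a single function regardless of the type passed in.
        string s = "{";
        foreach (i, ref field; value.tupleof)   // unrolled at compile time
        {
            static if (!hasUDA!(T.tupleof[i], NoSerialize))
                s ~= __traits(identifier, T.tupleof[i]) ~ ":"
                    ~ serialize(field) ~ ";";
        }
        return s ~ "}";
    }
    else
        return value.to!string;   // fallback for basic types
}

unittest
{
    static struct Point { int x; int y; @NoSerialize int cachedHash; }
    assert(serialize(Point(1, 2, 999)) == "{x:1;y:2;}");
}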

So naturally, in C you'd rather avoid such refactorings, preferring instead to keep the existing design so as to avoid breaking things unintentionally or spending too much time refactoring stuff that already works.  In D, you're freed from a lot of such concerns, so you're more likely to perform deep refactorings that change the original design in more drastic ways.


T

-- 
Question authority. Don't ask why, just do it.
October 05, 2021
On Tuesday, 5 October 2021 at 17:53:46 UTC, H. S. Teoh wrote:
> This is particularly powerful in UFCS chains: if you had to implement the UFCS chain in C, for example, you'd have to rewrite a whole bunch of types, variable declarations, etc., every time you need to do something like insert a new component into the chain, or reorder the chain.  That makes refactoring the equivalent code in C an onerous task,

This is all great, but C is too weak a competitor for it to be persuasive to someone who needs persuading.

Here's a remark on brittleness in Rust:

>I’ve seen this many times in Rust: I write a large chunk of code being a little careless about lifetimes, just to then have to change the types of my variables and functions everywhere in my program to avoid having to clone() things everywhere to make it work.

from https://renato.athaydes.com/posts/how-to-write-slow-rust-code.html
October 05, 2021

On Monday, 4 October 2021 at 23:10:27 UTC, Tejas wrote:

> Even the d-idioms website doesn't have much.
>
> Any text sources would be really appreciated.
>
> Thank you for reading!

The construction of the allocator library in https://www.youtube.com/watch?v=LIb3L4vKZ7U is really quite the lesson.

Also "Functional image processing in D": https://blog.cy.md/2014/03/21/functional-image-processing-in-d/

With all the good things that can be said about DbI, I don't think it is a defining feature of D. After using ae.utils.graphics for years, I turned 180° and now think generic programming typically comes with a few problems:

  • it's a bit remote from problem domains, meaning a bit less readable code, a bit harder to write, a bit longer to compile... in exchange for the expanded capabilities and genericity
  • the problem of having less specific identifiers
  • the problem of typically having poor information hiding. Probably the idea was that the software artifact is so generic that it has to be made public, which leads to too much being public.
October 05, 2021
On Tuesday, 5 October 2021 at 20:00:42 UTC, jfondren wrote:
> On Tuesday, 5 October 2021 at 17:53:46 UTC, H. S. Teoh wrote:
>> [...]
>
> This is all great, but C is too weak a competitor for it to be persuasive to someone who needs persuading.
>
> Here's a remark on brittleness in Rust:
>
>>[...]
>
> from https://renato.athaydes.com/posts/how-to-write-slow-rust-code.html

Interesting link about Rust. I think D could/should speak to those wanting to be more productive 🍀
October 05, 2021
On 10/5/2021 10:53 AM, H. S. Teoh wrote:
> Another factor is built-in unittests: if your code has an adequate set
> of unittests, you're far more likely to do larger-scale refactorings,
> because the unittests give you confidence that any glaring mistakes
> would be instantly caught.
This is why D's test suite is so crucial. We could never improve D without it.

The *lack* of a test suite for Optlink is what is killing it. It being all written in assembler isn't the issue.
October 06, 2021
On Tuesday, 5 October 2021 at 20:00:42 UTC, jfondren wrote:
> On Tuesday, 5 October 2021 at 17:53:46 UTC, H. S. Teoh wrote:
>> This is particularly powerful in UFCS chains: if you had to implement the UFCS chain in C, for example, you'd have to rewrite a whole bunch of types, variable declarations, etc., every time you need to do something like insert a new component into the chain, or reorder the chain.  That makes refactoring the equivalent code in C an onerous task,
>
> This is all great, but C is too weak a competitor for it to be persuasive to someone who needs persuading.
>

C has the whole market of UNIX clones, Khronos standards, and embedded for itself, where C++, after 30 years of trying, has hardly managed to make a dent.

October 06, 2021

On Tuesday, 5 October 2021 at 14:46:16 UTC, Patrick Schluter wrote:

> On Tuesday, 5 October 2021 at 01:33:45 UTC, Paul Backus wrote:
>> Of course, this is not really true in practice. In languages that don't support DbI, what actually happens is that you do not write all 2^N customized versions of the code. Instead, you give up on having individual customized versions for each use-case and write a single version based on some lowest-common-denominator abstraction. What you really lose here are the benefits to performance and expressiveness that come from having individually-customized versions of your code.
>
> What really happens in other languages is either proliferation of code (copy/paste) or branching at runtime (if cascades, booleans or switch/case).
> The D way may end up adding more code, but code with less branching.

A thing I notice in the C code where I work is that there is a lot of copy-paste when things are done in a hurry; in my opinion that happens a lot less with D because of its wide set of compile-time features.
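To make the "less branching" point above concrete, a tiny sketch (the function is invented): the decision happens at compile time, so each instantiation contains only the path it actually needs.

import std.traits : isFloatingPoint;

// One generic function; each instantiation keeps exactly one branch.
T midpoint(T)(T a, T b)
{
    static if (isFloatingPoint!T)
        return (a + b) / 2;       // only this line ends up in midpoint!double
    else
        return a + (b - a) / 2;   // integer version, no runtime test
}

unittest
{
    assert(midpoint(2, 6) == 4);
    assert(midpoint(1.0, 2.0) == 1.5);
}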

October 06, 2021

On 10/4/21 6:15 PM, Walter Bright wrote:

> This is plasticity, the opposite of brittleness.
>
> What are your experiences with this?

For the most part, it's great!

There is one place where I have struggled though, and where D might be able to do better: optional parentheses and taking the address. When a property can be either an accessor or a field, obj.prop works the same either way. However, &obj.prop is not the same.
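A minimal illustration of the mismatch (the types here are hypothetical):

struct Field
{
    int prop;                   // plain field
}

struct Accessor
{
    private int _p;
    int prop() { return _p; }   // accessor callable without parentheses
}

void main()
{
    Field f;
    auto p1 = &f.prop;          // int*: the address of the field

    Accessor a;
    auto p2 = &a.prop;          // not an int*: this is a delegate to the
                                // accessor, not the address of its result
}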

I have solved it by using this stupid function:

auto ref eval(T)(auto ref T t) { return t; }

// instead of &obj.prop
auto ptr = &eval(obj.prop);

I just came across this workaround in my code and since I wrote it a long time ago it puzzled me "what is this eval function?". Took me a minute to remember why I did that.

While it's nice D has a mechanism to work around this difference, I find having to use such shims a bit awkward. And it's a direct consequence of hiding the implementation of a field/property behind the same syntax. You can find other cases where D can be awkward (such as typeof(obj.prop) or checking types regardless of mutability).

What is the "better" answer though? I don't know.

-Steve

October 06, 2021

On Wednesday, 6 October 2021 at 15:59:46 UTC, Steven Schveighoffer wrote:

> [...]
>
> I just came across this workaround in my code and since I wrote it a long time ago it puzzled me "what is this eval function?". Took me a minute to remember why I did that.
>
> [...]

This is something I agree with. I often come across small snippets of code that I wrote, and sometimes I can't remember why I did it that way until I try it again and then it's like "oh yeah, it was to work around this thing".