November 11, 2021
On Wednesday, 10 November 2021 at 23:15:09 UTC, forkit wrote:
> On Wednesday, 10 November 2021 at 22:17:48 UTC, russhy wrote:
>> On Wednesday, 10 November 2021 at 06:47:32 UTC, forkit wrote:
>>> btw. My pc has 24GB of main memory, and my CPU 8MB L3 cache. So I really don't give a damn about allocations .. not one little bit ;-)
>>
>
>> Having the right mindset helps not make these mistakes in the future
>>
>> Changing habits is hard, make sure to train yourself to pick the right one, early if possible
>
> Umm.. you kinda missed the last part of my post...
>
> ..where I said..'Now if I were running a million processes across 1000's of servers, I probably would give a damn.'

Your CPU executes code speculatively, and in parallel. That's just the way it is; it's how the machine works. Therefore, you simply cannot afford NOT to think about that. You're just wasting your machine's time if you don't care about it. Consequently, you're wasting your time, someone else's time, and, well, money.
No, it's not a million processes across 1000s of servers; it's at least two processes across n cores, each of which has m ALUs. (At least two processes, because the OS is involved.) But it's two processes from you, two processes from me, four from the other guy, and pretty soon you have a monster that struggles to utilize even a tenth of your machine. All because "I'll care about it tomorrow".

> C'mon... nothing in my code was 'unacceptable' in terms of speed or efficiency.
>
> Making code transformations to improve speed and efficiency is important, but secondary. A newcomer cannot improve code that they do not understand ;-)

Yes, optimization is hard. Pessimization, though, is laughably easy, and should be avoided at all costs. "I don't care" is the number 1 reason for the atrocious situation with software that we're in right now. And people tend to come up with all sorts of ridiculous excuses, but most of them, distilled, do amount to "I don't care".
My phone is a thousand times faster, and has two million times more memory, than the thing that guided Apollo 11 to the Moon. And yet it sometimes takes longer to react to my keystrokes than the roundtrip of a radio signal to the Moon *and* back.
My desktop is two times faster, and has eight times more memory, than my phone. But it consistently takes THREE TIMES the roundtrip of a radio signal to the Moon and back to start Visual Studio.
Heck, it will take longer than the roundtrip of a radio signal to the Moon and back to post this message after I hit "Send". And I'm reasonably certain this news server is not on the Moon. This all is, plainly, ridiculous.
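(For reference: the Moon averages about 384,400 km from Earth, so that radio roundtrip is roughly 2 × 384,400 km / 299,792 km/s ≈ 2.6 seconds.)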

> Picking understandable code first, IS right.

No. "Understability" is subjective. Show me some code in Scala or Fortran, I won't make heads or tails of it, even though it may be perfectly obvious to the next guy who also never saw either language. What's "right" is writing code that doesn't perform, or cause, any unnecessary work. It may not be fast, but at least it won't be slow for no reason.

> In any case, I say again, this thread is not about writing performance code per se, but about presenting code to new-comers, that they can make some sense of.

And how does invoking extra work help with that? Coupled with not even explaining what that extra work is and why it is there?

> Taking some C code, and writing/presenting it in D (or vice versa) in such a way that you can no longer make any sense of it, is kinda futile.

The original C code from the first post can only fail on I/O, which is arguably out of your control. And the meat of it amounts to 10 conditional stores. Your implementations, in both C and D, are a very, very long way away from that. Like I mentioned before, the whole algorithm can already complete even before a single `malloc` call starts executing. Not caring about *that* is just bad engineering.
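To make that concrete, here is a rough sketch (hypothetical code, not anyone's actual program from this thread) of what the task boils down to - one pass over a stack array, one conditional per element, and nothing that ever touches the heap:

    import std.stdio : writeln;

    void main()
    {
        int[10] numbers = [-3, 14, 47, -49, -30, 15, 4, -82, 99, 26];
        foreach (n; numbers)    // ten iterations over a stack array
            if (n % 2 == 0)     // the predicate (evens, as one example)
                writeln(n);     // the only thing here that can fail is the I/O
    }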

If you're writing C, or D, you're not writing pulp fiction for the masses. There are other languages for that.
November 11, 2021
On Thursday, 11 November 2021 at 00:11:07 UTC, H. S. Teoh wrote:

> It depends on what you're doing. In the OP's example, yeah, worrying about allocations is totally blowing things out of proportion.

But that's the thing. How would one ever learn where that dividing line is, if all the learning material they see teaches them the opposite - to not know or care?

'Twas a simple task: traverse an array and print numbers out of it based on a predicate. That is all the original program did. How does doing (LOTS) more than that make any of it easier to understand, let alone as cheap to execute?

forkit says: "if I was writing millions of processes across 1000s of servers..." No. Just... no. If I've only ever written "convenient" pessimized code, I won't magically start lighting up those wasted transistors just because I got a new job. I'll just keep writing the same "convenient" pessimized code, only now it would span 1000s of servers. Which is the exact effing situation that we're in already!
November 11, 2021
On 11/11/21 11:34 AM, Stanislav Blinov wrote:

> Pessimization, though, is laughably easy, and
> should be avoided at all costs.

I am not passionate about this topic at all and I am here mostly because I have fun in this forum. So, I am fine in general.

However, I don't agree that pessimization should be avoided at all costs. I like D because it allows me to be as sloppy as I want, and to fix my code later on when it really matters.

> Not caring about *that* is just bad engineering.

There wasn't a problem statement with the original code. So, we understood and valued it according to our past experiences. For example, most of us assumed the program was about 10 integers, but I am pretty sure that array was just an example and the program was meant to deal with a larger number of elements.

Another example: you seem to value performance over maintainability, because you chose to hard-code the selection letters separately from the help text, without any programmatic link between the two:

  write("Would you like in list (e=evens, o=odds, b=both)? ");
  readf(" %c", &even);

  if ((negativity == 'b') && (even == 'b'))

For example, the 'b's in that last line may be left behind unmodified if someone changes the help text alone. Someone may find that kind of coding "bad engineering" that should be avoided at all costs. (Not to defend myself, but my associative array was there exactly because I could not bring myself to separate those selection letters from the help text. I simply could not show unmaintainable code to a beginner.)
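For what it's worth, coupling them doesn't have to mean the full associative-array treatment either. A minimal sketch (hypothetical names, not the code posted earlier in the thread): one table drives both the prompt text and the letters the rest of the program compares against, so they cannot drift apart:

    import std.algorithm : map;
    import std.array : join;
    import std.format : format;
    import std.stdio : readf, write;

    struct Choice { char letter; string label; }

    immutable Choice[] evenChoices =
        [Choice('e', "evens"), Choice('o', "odds"), Choice('b', "both")];

    char ask(string what, const(Choice)[] choices)
    {
        // builds e.g. "e=evens, o=odds, b=both" from the same table
        // the caller will later compare the answer against
        auto menu = choices.map!(c => format("%s=%s", c.letter, c.label)).join(", ");
        write("Would you like ", what, " in list (", menu, ")? ");
        char answer;
        readf(" %c", &answer);
        return answer;
    }

The rest of the program can then take the 'b' from the same table (e.g. the last entry's letter) instead of repeating the character literal.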

I don't think there is much more to see here: A simple C program, liberties taken for fun in D, pessimization is bad, unmaintainability is bad, yes.

Ali

November 11, 2021
On Thursday, 11 November 2021 at 21:13:03 UTC, Stanislav Blinov wrote:
> On Thursday, 11 November 2021 at 00:11:07 UTC, H. S. Teoh wrote:
>
>> It depends on what you're doing. In the OP's example, yeah, worrying about allocations is totally blowing things out of proportion.
>
> But that's the thing. How would one ever learn where that dividing line is, if all the learning material they see teaches them the opposite - to not know or care?

It's called 'staged learning'.

Staged learning is the only way for humans to learn, due to the limitations of the human cognitive system. Specifically, the way short-term memory and long-term memory facilitate learning.

Those who lack this understanding of how humans learn, tend to throw too much at novices.

Also, this apparent drive towards requiring novices to understand the implications of their code, in terms of optimising the assembly that gets produced, is just nonsense. They'll never get past the for loop!

November 11, 2021
On Tuesday, 2 November 2021 at 23:45:39 UTC, pascal111 wrote:
> The following code was originally classic C code that I had written - pure vertical thinking. I have now converted it successfully to D, but I don't think I changed much to give it the more horizontal thinking style that D programmers seem to care about. Are there any additions I can make to the code to make it fit D style, or is it fine as it is?
>

ok.. for a more on topic response..

    First: Please name your variables sensibly:
        char negativity, even; // grrr!!!
        char answer1, answer2; // makes so much more sense

    Second: You need to use D-style syntax in your array declaration:
        int numbers[10] = [-3, 14, 47, -49, -30, 15, 4, -82, 99, 26]; // C style
        int[10] numbers = [-3, 14, 47, -49, -30, 15, 4, -82, 99, 26]; // D style

    Third: You're asking for trouble in your for loop declaration:
        for(int i = 0; i < 10; ++i) // grrr!! What if you counted up your elements incorrectly?
        for(int i = 0; i < sizeof(numbers) / sizeof(int); ++i) // is so much safer - in C style
        for(int i = 0; i < numbers.length; ++i) // is so much safer - in D style


Finally, D is multi-paradigm. That's probably the first (and most important) thing you should know about D. Yes, you can write C-style code easily, and in many cases that's just fine. You can also write in other styles.

But sometimes it is worthwhile rewriting code differently, to see what advantages, if any, come about. D is a language where you can actually do just that. Above is a good (although very basic) example - i.e. changing the way you define a for loop, so that it's safer, simpler, and more maintainable.

Also, the use of lambdas, UFCS, etc. (which is probably what you meant by 'horizontal' code), if used sensibly, can remarkably reduce and simplify code, as well as contribute to its maintainability.
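For example, here is a rough sketch (not your actual program, just an illustration) of the same traversal written with UFCS and a lambda:

    import std.algorithm : each, filter;
    import std.stdio : writeln;

    void main()
    {
        int[10] numbers = [-3, 14, 47, -49, -30, 15, 4, -82, 99, 26];
        numbers[].filter!(n => n % 2 == 0)  // keep the evens (lazily, no allocation)
                 .each!writeln;             // print each one
    }

Same work as the explicit loop, just reading left to right instead of nesting - which, I suspect, is the 'horizontal' style you were asking about.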

Thankfully, D is multi-paradigm (which is its best and worst feature), and so it lets you experiment with different styles. Try doing this in C, Go, Rust...

November 11, 2021
On Thursday, 11 November 2021 at 22:30:12 UTC, forkit wrote:
[...]
>      for(int i = 0; i < sizeof(numbers) / sizeof(int); ++i) // is so much safer - in C style

I made it even safer:

   for (int i = 0; i < sizeof numbers / sizeof *numbers; ++i)

Maybe the type of numbers changes in the future. sizeof is an operator; it needs parentheses around its operand only when the operand is a type. Writing sizeof(non-type-arg) suggests a binding that does not exist:

   #include <stdio.h>
   struct bar {
      long l1;
      long l2;
   };

   /* stringize the argument and also evaluate it for printing */
   #define S(a) #a, (long unsigned) a

   int main ()
   {
      struct bar *b;  /* never dereferenced: sizeof does not evaluate its operand */
      printf ("%25s = %lu\n", S(sizeof(*b)));     /* size of the whole struct: 16 */
      printf ("%25s = %lu\n", S(sizeof(*b).l1));  /* .l1 binds first: sizeof((*b).l1) */
      printf ("%25s = %lu\n", S(sizeof(b)->l1));  /* ->l1 binds first: sizeof((b)->l1) */
      return 0;
   }

compiles (!) and gives:

               sizeof(*b) = 16
            sizeof(*b).l1 = 8
            sizeof(b)->l1 = 8


November 12, 2021
On Thursday, 11 November 2021 at 21:56:19 UTC, Ali Çehreli wrote:
> On 11/11/21 11:34 AM, Stanislav Blinov wrote:
>
> > Pessimization, though, is laughably easy, and
> > should be avoided at all costs.
>
> I am not passionate about this topic at all and I am here mostly because I have fun in this forum. So, I am fine in general.
>
> However, I don't agree that pessimization should be avoided at all costs. I like D because it allows me to be as sloppy as I want, and to fix my code later on when it really matters.

And when is that? And why is it not now? You mean prototyping? You can do that in any language; I'm not sure what's special about D here. Sure, prototype away. No one gets everything right in a single keystroke.

> > Not caring about *that* is just bad engineering.
>
> There wasn't a problem statement with the original code. So, we understood and valued it according to our past experiences. For example, most of us assumed the program was about 10 integers, but I am pretty sure that array was just an example and the program was meant to deal with a larger number of elements.

What difference does that make? You still don't want to do any unnecessary work, whether you're dealing with one puny array or a million of them.

> Another example: you seem to value performance over maintainability, because you chose to hard-code the selection letters separately from the help text, without any programmatic link between the two:

I didn't "choose" that. That's Siarhei Siamashka's version, fixed up to use `each` instead of `map!text.join`, because latter serves no useful purpose.

That is a weird comparison anyway. Performance OVER maintainability? There's nothing "more performant" or "less maintainable" in the input handling code, as it's the same as the original.

>   write("Would you like in list (e=evens, o=odds, b=both)? ");
>   readf(" %c", &even);
>
>   if ((negativity == 'b') && (even == 'b'))
>
> For example, the 'b's in that last line may be left behind unmodified if someone changes the help text alone.

Let me see if I get this straight now... are you seriously talking about protecting against this program being altered by a goldfish? Because in that case, I'd best not write any code altogether. Someone someday may put an extra semicolon somewhere, the thing would stop compiling, and I'd be totally ruined.
...Perhaps let's actually stay on this planet?

> Someone may find that kind of coding "bad engineering" that should be avoided at all costs.

Sure. Someone definitely may. It'd certainly be nice to decouple input from logic. That doesn't take 14 times more code and AAs to do though. Just sayin' ;)

> (Not to defend myself, but my associative array was there exactly because I could not bring myself to separate those selection letters from the help text. I simply could not show unmaintainable code to a beginner.)

The original UI code is four lines. Four. Not fool-proof, input's not even validated. But four lines. Yours is fifty-five, just to construct and present the UI and read input. Consequently, the original program is about 14 times easier to maintain than yours. What did I miss?
November 12, 2021
On Thursday, 11 November 2021 at 23:41:48 UTC, kdevel wrote:
> On Thursday, 11 November 2021 at 22:30:12 UTC, forkit wrote:
> [...]
>>      for(int i = 0; i < sizeof(numbers) / sizeof(int); ++i) // is so much safer - in C style
>
> I made it even safer:
>
>    for (int i = 0; i < sizeof numbers / sizeof *numbers; ++i)
>

(void*)*numbers = NULL;

..don't you just love C ;-)

November 12, 2021
On Thursday, 11 November 2021 at 22:10:04 UTC, forkit wrote:

> It's called 'staged learning'.
>
> Staged learning is the only way for humans to learn, due to the limitations of the human cognitive system. Specifically, the way short-term memory and long-term memory facilitate learning.
>
> Those who lack this understanding of how humans learn, tend to throw too much at novices.

Like making a simple program do a bunch of extra work for literally no reason?

> Also, this apparent drive towards requiring novices to understand the implications of their code, in terms of optimising the assembly that gets produced, is just nonsense. They'll never get past the for loop!

This has nothing to do with "optimising the assembly".
November 12, 2021
On Friday, 12 November 2021 at 01:05:15 UTC, Stanislav Blinov wrote:
> On Thursday, 11 November 2021 at 22:10:04 UTC, forkit wrote:
>
>> It's called 'staged learning'.
>>
>> Staged learning is the only way for humans to learn, due to the limitations of the human cognitive system. Specifically, the way short-term memory and long-term memory facilitate learning.
>>
>> Those who lack this understanding of how humans learn, tend to throw too much at novices.
>
> Like making a simple program do a bunch of extra work for literally no reason?
>

I think what you're missing here is that this thread was about how to bring some C code into D, and do it in some kind of 'D style'.

But this thread clearly demonstrates that there is no 'D style'.

Stop being so combative ;-)  Let people be creative first, and performance-oriented second (or third.. or .....).

If we all wrote code the same way, we'd likely all be thinking the same way... and that would not be good.

If people want code that looks the same as everyone else's, and forces you to think in the same way as everyone else, they can go use Go ;-)