November 12, 2021
On Friday, 12 November 2021 at 01:05:15 UTC, Stanislav Blinov wrote:
> On Thursday, 11 November 2021 at 22:10:04 UTC, forkit wrote:
>
>> It's called 'staged learning'.
>>
>> Staged learning is the only way for humans to learn, due to the limitations of the human cognitive system. Specifically, the way short-term memory and long-term memory facilitate learning.
>>
>> Those who lack this understanding of how humans learn, tend to throw too much at novices.
>
> Like making a simple program do a bunch of extra work for literally no reason?
>
>> Also, this apparent drive towards requiring novices to understand the implications of their code, in terms of optimising the assembly that gets produced, is just nonsense. They'll never get past the for loop!
>
> This has nothing to do with "optimising the assembly".

Also, I think it's important not to conflate code that doesn't perform well with code that is not optimal.

It doesn't take very long as a developer before you realise nobody likes slow software ;-)  so performance will ALWAYS be front of mind, once you reach that point.

We don't need people telling us that our code needs to be written so that it performs well.

But even fast software is not necessarily optimised, and is likely far from it in many scenarios... and getting worse, it seems... which is ok, as long as faster devices keep rolling out, I guess..

Most people are developers (incl. me), not programmers per se, and don't have the time to understand the ins and outs of how code executes on hardware, or how one library function interacts with another... and so are not really in a position to optimise it. They are more interested in ensuring the solution they give to their users is not slow. That's what matters to them, and to their users.

An app that uses 3GB of your memory, when (if optimised) it really could reduce that to 1GB, sounds great, in principle. But what effect will that optimised code have in terms of bugs and maintenance? So there are tradeoffs that do genuinely need to be considered, and circumstances differ.

btw. None of the solutions in this thread are slow.. not by any means... and it's certainly likely that none are optimal.

Hopefully the compiler and library are doing their very best to make it optimal ;-)



November 12, 2021
On Thursday, 11 November 2021 at 19:34:42 UTC, Stanislav Blinov wrote:
>
> Original C code from the first post can only fail on I/O, which is arguably out of your control. And the meat of it amounts to 10 conditional stores. Your implementations, in both C and D, are a very, very far distance away from that. Like I mentioned before, the whole algorithm can already complete even before a single `malloc` call starts executing. Not caring about *that* is just bad engineering.

Actually, that's an interesting comment, as it clearly indicates where we differ in perspective when we read that initial code.

The 'meat' of it, from my perspective, is 'there's someone using this code'.

So why is there no code to verify that input, and to handle incorrect input in a user-friendly manner?

The rest of the coding is kinda automatic reflex.. use a for loop.. or a switch.. use an array or a vector.. separate concerns into functions... I really don't care what or how you do that part (as long as it's not code I need to understand or maintain).

But don't give me a program that says enter a, b or c, but ultimately doesn't give a shi# what I enter.

Hence the 'additions' I made ;-)
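The kind of validation being argued for here can be sketched roughly like this (a hypothetical a/b/c prompt in D, not the actual code from the thread):

```d
import std.stdio;
import std.string : strip;

void main()
{
    string answer;

    // re-prompt until the input actually matches the menu we printed
    do
    {
        write("Enter a, b or c: ");
        answer = readln();
        if (answer is null)
            return;         // EOF: give up gracefully instead of looping
        answer = answer.strip;
    }
    while (answer != "a" && answer != "b" && answer != "c");

    writeln("You entered: ", answer);
}
```

i.e. the program's promise ("enter a, b or c") and the program's behaviour actually agree.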

I think your perspective, and mine (and others) are ALL valid.

November 13, 2021

On Thursday, 11 November 2021 at 22:30:12 UTC, forkit wrote:
> On Tuesday, 2 November 2021 at 23:45:39 UTC, pascal111 wrote:
>>
>> [...]
>
> ok.. for a more on topic response..
>
> First: Please name your variables sensibly:
>  char negativity, even; // grrr!!!
>  char answer1, answer2; // makes so much more sense
>
> [...]

I feel that D is a big language; it's not small like standard C. It will cost me a lot of study, so I think I need to slow down and learn it step by step.

November 14, 2021
On Saturday, 13 November 2021 at 23:02:15 UTC, pascal111 wrote:
>
> I feel that D is a big language; it's not small like standard C. It will cost me a lot of study, so I think I need to slow down and learn it step by step.

Yes. C is so much smaller, and thus simpler (till you wanna do something complex)

But C also makes it 'simpler' to shoot yourself in the foot ;-)

Even with your simple C code, which is so easily transposed to D, you already benefit from:

- variables always being initialised before use
(i.e. no need to initialise i in your for loop, in D)

- array indices being automatically checked for out-of-bounds access
(i.e. i < 10 ... in D.. it becomes.. numbers.length)

so you can make your simple code, even 'simpler' in D ;-)
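In code, the two points above look something like this (a minimal sketch, assuming a 10-element array like the one in the original C program):

```d
import std.stdio;

void main()
{
    int[10] numbers;    // every element is default-initialised to 0 in D

    int i;              // also default-initialised to 0: no '= 0' needed
    for (; i < numbers.length; i++)
        numbers[i] = i * 2;

    // or skip manual indexing entirely:
    foreach (n; numbers)
        write(n, " ");
    writeln();

    // numbers[10] = 1; // out of bounds: rejected at compile time here,
                        // and a RangeError at runtime for a dynamic index
}
```

So the `i < 10` bound from the C version never has to be written (or got wrong) by hand.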

November 16, 2021
On Sunday, 14 November 2021 at 04:29:53 UTC, forkit wrote:
> On Saturday, 13 November 2021 at 23:02:15 UTC, pascal111 wrote:
>>
>> I feel that D is a big language; it's not small like standard C. It will cost me a lot of study, so I think I need to slow down and learn it step by step.
>
> Yes. C is so much smaller, and thus simpler (till you wanna do something complex)
>
> But C also makes it 'simpler' to shoot yourself in the foot ;-)
>
> Even with your simple C code, which is so easily transposed to D, you already benefit from:
>
> - variables always being initialised before use
> (i.e. no need to initialise i in your for loop, in D)
>
> - array indices being automatically checked for out-of-bounds access
> (i.e. i < 10 ... in D.. it becomes.. numbers.length)
>
> so you can make your simple code, even 'simpler' in D ;-)

Thanks! I won't hide my real intention. Learning C and then learning D is like the case of Dr. Jekyll and Mr. Hyde: if I'm in the place of Jekyll, I treat Hyde as a son and want to teach him programming. We can't do that with C, because it belongs to vertical thinking, while D belongs to Hyde, or horizontal thinking.