June 15, 2021

On Tuesday, 15 June 2021 at 10:04:03 UTC, Ola Fosheim Grøstad wrote:

>

So basically:

    coroutine(){
       int n = 0;
       while(true){
          n++;
          yield n;
          if (n > 100) {
            yield 0;
          }
       }
    }

can be distilled into something like this (unoptimized):

    struct {
        int n;
        int result;
        delegate nextstate = state1;
        state1() {
           n++;
           result = n;
           nextstate = state2;
        }
        state2() {
           if (n > 100) {
               result = 0;
               nextstate = state1;
           }
           else state1();
        }
    }
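
For reference, here is a runnable D sketch of the same hand-lowering. The Counter name and the start/next helpers are mine, added only so the example compiles and runs:

    import std.stdio;

    // Hand-lowered "coroutine": the frame is a struct, the resume point is a
    // delegate, and `result` holds the value that `yield` would have produced.
    struct Counter
    {
        int n;
        int result;
        void delegate() nextState;

        // The delegates capture `this`, so don't copy or move the struct
        // after calling start().
        void start() { nextState = &state1; }

        // Corresponds to the code up to the first `yield n;`.
        void state1()
        {
            n++;
            result = n;
            nextState = &state2;
        }

        // Corresponds to the `if (n > 100) yield 0;` branch.
        void state2()
        {
            if (n > 100)
            {
                result = 0;
                nextState = &state1;
            }
            else
                state1();
        }

        // Resume the "coroutine" and return the next yielded value.
        int next()
        {
            nextState();
            return result;
        }
    }

    void main()
    {
        Counter c;
        c.start();
        foreach (i; 0 .. 5)
            writeln(c.next()); // prints 1, 2, 3, 4, 5
    }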

Reminds me a bit of CPS: https://github.com/zevv/cpsdoc

June 15, 2021

On Tuesday, 15 June 2021 at 13:25:23 UTC, Paulo Pinto wrote:

> >

The real problem is when the safe code is not fast enough and people rewrite it in an unsafe language.

Actually the real problem is that people think it is not fast enough, based on urban myths, without touching any kind of profiling tools or measuring it against the project's hard deadlines.

I agree it's a factor. However, in one case you can prove the myths wrong with real data, while in the other case the data is given as a reason to take the low-level/unsafe route.

>

More often than not, it is already fast enough, unless we are talking about winning micro-benchmark games.

Perhaps you feel the need to push back on some prevailing Reddit/Hackernews misconceptions, but I'm referring to real-world cases. For example, how is it possible that on the same computer switching between two Slack channels takes 3-4 seconds, while demanding AAA games from 2-3 years ago run just fine? Unless we're living in different universes, you must have noticed the increasing number of bloatware apps that are slower and jankier than ever, without any corresponding increase in functionality. I'm not saying that the use of a tracing GC is the problem or anything of the sort. Oftentimes there are many small inefficiencies (each small enough that it's lost in the noise of a profiler trace) that, taken as a whole, accumulate and make the perceived user experience bad.

Also, I don't know about you, but since you often talk about .NET and UWP: in the past I've worked full-time on WPF/SL/UWP control libraries and MVVM apps. For a pure app developer, the profiler often is really not that helpful when most of the CPU time is spent in the framework, after they've async-ified their code and moved all heavy computation out of the UI thread. Back then (before .NET Core was even a thing) I used to spend a lot of time using decompilers, and later browsing referencesource.microsoft.com, to understand where the inefficiencies lie. In the end, the solution (as confirmed by both benchmarks and perceived application performance) was often to rewrite the code to avoid high-level constructs like DependencyProperty, and sometimes even to reach for big hammers like ILEmitter to speed up code that was forced to rely on runtime reflection.

In summary, when the framework dictates an inefficient API design (one that is also not really type-safe: everything relies on dynamic casts and runtime reflection), the whole ecosystem (from third-party library developers to user-facing app writers) suffers. In the past several years MS has put a ton of effort into optimizing .NET Core under the hood, but oftentimes the highest gains come from more efficient APIs (otherwise, why would they invest all this effort into value types, Spans, ref returns, etc.?).

June 15, 2021
On Tuesday, 15 June 2021 at 14:31:31 UTC, Dukc wrote:
> On Tuesday, 15 June 2021 at 13:25:23 UTC, Paulo Pinto wrote:
>>
>> Actually the real problem is that people think it is not fast enough, based on urban myths, without touching any kind of profiling tools or measuring it against the project's hard deadlines.
>>
>> More often than not, it is already fast enough, unless we are talking about winning micro-benchmark games.
>
> This is true, but what if the benchmarks show you do need to optimize? You need to avoid systemically slow designs in advance or you run the risk of having to rewrite the whole program.
>
> Now, you absolutely can design your program in [insert non-system language] so that it can be optimized as needed. But I believe Petar was arguing that certain kinds of inefficient iterator APIs discourage doing so.

Yes, that's a good summary.
June 15, 2021

On Tuesday, 15 June 2021 at 14:52:53 UTC, sighoya wrote:

>

Reminds me a bit of CPS: https://github.com/zevv/cpsdoc

Yes, continuation passing is an implementation strategy for high-level languages.

C++ coroutines may be a bit different from what I presented, though: the memory for saving state is reused like a stack frame. So if you have a variable "x" that isn't used after a certain point, then its storage can be reused for a variable "y"... Makes sense.
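
As a rough illustration (my own sketch, not how any particular compiler actually lays out coroutine frames), two variables that are never live at the same time can share a slot in the saved state, e.g. with an anonymous union:

    // Hypothetical hand-written frame: `x` is dead before `y` becomes live,
    // so the lowered frame can overlay their storage.
    struct Frame
    {
        int n;        // live across suspension points
        union
        {
            double x; // only used before some suspension point
            long   y; // only used after it
        }
    }

The real layout is up to the compiler, but the effect is the same: state that is dead across a suspension point doesn't need a dedicated slot in the frame.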

June 15, 2021
On 6/14/2021 7:29 AM, Steven Schveighoffer wrote:
> I wonder if there is room for hobbled iterators (cursors) to be in D in some capacity.

Have them be an index, rather than a pointer.
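
One way to read that suggestion, as a hedged sketch (the Cursor type and get helper below are hypothetical, not an existing D API): the cursor stores only an index into its container, so it can be bounds-checked on every access and it survives the array being reallocated, unlike a raw interior pointer.

    // A "hobbled iterator": just a position, with no pointer into the data.
    struct Cursor
    {
        size_t index;
    }

    int get(int[] arr, Cursor c)
    {
        return arr[c.index]; // array bounds are checked at runtime
    }

    void main()
    {
        auto arr = [10, 20, 30];
        auto c = Cursor(1);
        arr ~= 40;               // may reallocate; a raw pointer could dangle
        assert(get(arr, c) == 20);
    }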

June 16, 2021

On Tuesday, 15 June 2021 at 16:21:07 UTC, Petar Kirov [ZombineDev] wrote:

>

real-world cases. For example, how is it possible that on the same computer switching between two Slack channels takes 3-4 seconds, while demanding AAA games from 2-3 years ago run just fine? Unless we're living in different

This trend has been true since the 1980s, when people wrote key routines in assembly. Meaning, most projects aim for being usable at the lowest price. More capable computers mean less efficient programming...

I think a better argument is that system-level programming requires predictable latency and high efficiency to a greater extent.

For instance, if I create a solar-powered monitoring system, then I want to use a low-power CPU.

Walter says D is for systems programming. Well, then it follows that everything in the standard library should be designed for that purpose.

Meaning: predictable latency and highest efficiency, either by being fast OR by consuming minimal resources (energy or RAM).

If D does not follow through on that, then it cannot be considered a dedicated system-level language. But then D needs to define what it is for.

June 16, 2021

On Wednesday, 16 June 2021 at 07:47:18 UTC, Ola Fosheim Grostad wrote:

>

On Tuesday, 15 June 2021 at 16:21:07 UTC, Petar Kirov [ZombineDev] wrote:

>

real-world cases. For example, how is it possible that on the same computer switching between two Slack channels takes 3-4 seconds, while demanding AAA games from 2-3 years ago run just fine? Unless we're living in different

This trend has been true since the 1980s, when people wrote key routines in assembly. Meaning, most projects aim for being usable at the lowest price. More capable computers mean less efficient programming...

That would explain it if the programs were as slow as before, but Petar also complained about programs that are even slower. There has to be more.

It could be survivorship bias. Perhaps we just don't remember the slower programs from the old days.

I think that computers (and internet connections) differ in power more than they used to. The alternative explanation might be that developers with powerful computers don't notice as easily how slow the stuff they are making is.

Then again, it also might be more competition as the industry gets bigger => tighter deadlines and less slack => software getting done worse and worse.

>

Walter says D is for systems programming. Well, then it follows that everything in the standard library should be designed for that purpose.

Not everything. You're certainly familiar with the D principle of being BOTH a systems programming and an application programming language. It follows that the standard library should cater to both cases: large parts of it should be handy for systems programming, but not everything, because that would drive application programmers away.

And Phobos is on the right track. There are shortcomings, but nothing is fundamentally wrong from a systems programming perspective. D ranges are totally usable at system level. I've used them myself when compiling to WebAssembly, where the D ecosystem is (currently) just as primitive as on some microcontroller.
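
As a minimal sketch of that point (my example, not code from the thread), a struct-based range needs neither the GC nor druntime, so it also compiles with -betterC or for a bare WebAssembly target:

    // A GC-free input range: just a struct with front/empty/popFront.
    struct IntRange
    {
        int front;
        int end;

        bool empty() const { return front >= end; }
        void popFront() { ++front; }
    }

    extern (C) int main() // -betterC entry point; use a normal main() otherwise
    {
        int sum = 0;
        foreach (x; IntRange(0, 10)) // foreach over a range needs no runtime support
            sum += x;
        return sum == 45 ? 0 : 1;
    }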

June 16, 2021

On Wednesday, 16 June 2021 at 07:47:18 UTC, Ola Fosheim Grostad wrote:

>

On Tuesday, 15 June 2021 at 16:21:07 UTC, Petar Kirov [ZombineDev] wrote:

>

[...]

This trend has been true since the 1980s, when people wrote key routines in assembly. Meaning, most projects aim for being usable at the lowest price. More capable computers mean less efficient programming...

I think a better argument is that system-level programming requires predictable latency and high efficiency to a greater extent.

For instance, if I create a solar-powered monitoring system, then I want to use a low-power CPU.

Walter says D is for systems programming. Well, then it follows that everything in the standard library should be designed for that purpose.

Meaning: predictable latency and highest efficiency, either by being fast OR by consuming minimal resources (energy or RAM).

If D does not follow through on that, then it cannot be considered a dedicated system-level language. But then D needs to define what it is for.

Ironically, Oberon, TinyGo, microEJ, .NET Nanoframework and Meadow fulfil that use case, while everyone keeps arguing about what D's killer use case is.

Assuming you are happy with an ARM Cortex-M0-class or Arduino-like device.

June 16, 2021

On Wednesday, 16 June 2021 at 10:20:05 UTC, Dukc wrote:

>

That would explain it if the programs were as slow as before, but Petar also complained about programs that are even slower. There has to be more.

It could be survivorship bias. Perhaps we just don't remember the slower programs from the old days.

Yes, maybe. There might be other factors too:

  1. Business people are more present as leaders now (whereas it used to be engineers advancing to leadership positions). As a result, engineering quality is not appreciated to the same degree; that's my opinion. You see this with Apple products too: their presentations focus on the surface. Fancy surface, boring interior.

  2. People have become used to web applications, and so to latency (lag) when interacting with a system. Thus, end users may not think about why an application is sluggish (they don't distinguish between slow code and network lag). So users expect less? This is especially true compared to the 90s, when Amiga users were predominantly geeks, and thus more demanding.

  3. Programmers have become less proficient and fail to think about how the code they write maps through layers of libraries all the way down to the CPU/GPU. Of course, more layers also make this more difficult. I think many programmers now have the framework they use as their mental model, rather than the actual computer hardware.

>

I think that computers (and internet connections) differ in power more than they used to. The alternative explanation might be that developers with powerful computers don't notice as easily how slow the stuff they are making is.

Yep, developers should run their code on the low-end computers the application targets.

>

Then again, it also might be more competition as the industry gets bigger => tighter deadlines and less slack => software getting done worse and worse.

But more competition should lead to better quality... The problem with this (as in other sectors) is that for capitalism to work well for engineered products, the consumer has to be knowledgeable, demanding, and willing to shop around. But human beings tend to go with the flow (fashion, marketing, social peers) when making choices about things they have limited knowledge of and interest in. Audio software is still pretty good, for instance; I assume this is because consumers in that market have a fairly good understanding of what to expect.

>

Not everything. You're certainly familiar with the D principle of being BOTH a systems programming and an application programming language. It follows that the standard library should cater to both cases: large parts of it should be handy for systems programming, but not everything, because that would drive application programmers away.

And Phobos is on the right track. There are shortcomings, but nothing is fundamentally wrong from a systems programming perspective. D ranges are totally usable at system level. I've used them myself when compiling to WebAssembly, where the D ecosystem is (currently) just as primitive as on some microcontroller.

I understand where you are coming from, and I agree that D/C++ ranges can be used for system-level programming (at least as a starting point).

I still think everything in the standard lib should be useful for system-level programming, so you don't have to reconsider your options halfway through because of standard-lib deficiencies.

What is needed in addition to that is a standardized application-level library built on top of the standard library. This could ship with the compiler and be defined in terms of APIs (so different compilers can optimize the implementation for their backend).

Or maybe even an application framework (with keyboard, mouse, and graphics APIs).

I think a more layered ecosystem of standardized library APIs is better than one big monolithic standard library.

June 16, 2021

On Monday, 14 June 2021 at 01:48:18 UTC, Friendly Neighborhood Weirdo wrote:

>

This is a talk from the 2021 c++now conference: https://www.youtube.com/watch?v=d3qY4dZ2r4w

This is an interesting talk; it is more a comparison of iteration abstractions than a language comparison, covering six different abstractions.

It's a healthy reminder of just how complicated everything is in C++, often for no good reason.