November 08, 2019
On Friday, 8 November 2019 at 15:42:40 UTC, Russel Winder wrote:
> Chapel has many things to teach most other programming languages about parallelism, especially on a truly multi-processor computer. Not least of which is partitioned global address space (PGAS).

Yeah, but it seems geared towards HPC scenarios and I wonder how their model will hold up when "home computers" move towards many cores with local memory.

I've got a feeling that some model reminiscent of actor based languages will take over at some point. E.g. something closer to Go and Pony, but with local memory baked in as a design premise.

Still, it is interesting that we now see pragmatic languages that are designed with parallel computing as a premise. So we now have at least 3 young ones that try to claim parts of this space: Chapel, Go and Pony. And they are all quite different! Which I can't really say about the non-concurrent languages; C++, D and Rust are semantically much closer than Chapel, Go and Pony are.
November 09, 2019
On Friday, 8 November 2019 at 02:04:07 UTC, Heromyth wrote:
> See https://blog.rust-lang.org/2019/11/07/Async-await-stable.html.

Here are two projects about this:
https://github.com/evenex/future
http://code.dlang.org/packages/dpromise

I made some improvements based on them. Here is an example:

    void test05_03() {
        auto ex5a = tuple(3, 2)[].async!((int x, int y) {
            Promise!void p = delayAsync(5.seconds);
            await(p);
            return to!string(x * y);
        });

        assert(ex5a.isPending); // true: the 5-second delay has not completed yet

        auto r = await(ex5a);
        assert(r == "6");
    }
November 10, 2019
On Fri, 2019-11-08 at 15:57 +0000, Ola Fosheim Grøstad via Digitalmars-d
wrote:
[…]
> 
> Yeah, but it seems geared towards HPC scenarios and I wonder how their model will hold up when "home computers" move towards many cores with local memory.

Chapel and its parallelism structures work just fine on a laptop.

> I've got a feeling that some model reminiscent of actor based languages will take over at some point. E.g. something closer to Go and Pony, but with local memory baked in as a design premise.

We were saying that in 1988; my research team and I even created a programming language, Solve – admittedly active objects rather than actors, but in some ways the two are indistinguishable to most programmers. We even did a couple of versions of the model based on C++ in the early 1990s: UC++ and KC++. I am still waiting for people to catch up. I am not holding my breath, obviously.

> Still, it is interesting that we now see pragmatic languages that are designed with parallel computing as a premise. So we now have at least 3 young ones that try to claim parts of this space: Chapel, Go and Pony. And they are all quite different! Which I can't really say about the non-concurrent languages; C++, D and Rust are semantically much closer than Chapel, Go and Pony are.

Chapel and Pony are the interesting ones here. Chapel I believe can get traction since it is about using declarative abstractions to harness parallelism on a PGAS model. Pony I fear may be a bit too niche to get traction but it proves (existence proof) an important point about actors that previously only Erlang was pushing as an idea.

Go is not uninteresting – quite the opposite, since it is based on processes and channels and effectively implements CSP. However, far too many people using Go fail to harness goroutines properly, since they have failed to learn the lesson that shared-memory multi-threading is not the right model for harnessing parallelism.

-- 
Russel.
===========================================
Dr Russel Winder      t: +44 20 7585 2200
41 Buckmaster Road    m: +44 7770 465 077
London SW11 1EN, UK   w: www.russel.org.uk



November 10, 2019
On Fri, 2019-11-08 at 09:57 +0000, Sebastiaan Koppe via Digitalmars-d wrote: […]
> Please have a look at the approach taken by structured concurrency. Recently mentioned on this forum by John Belmonte: https://forum.dlang.org/post/rnqbswwwhdwkvvqvodlb@forum.dlang.org
[…]

It is also worth remembering Reactive Programming

https://en.wikipedia.org/wiki/Reactive_programming

There was a lot of exaggerated hype when it first came out, but over time it all settled down, leading to a nice way of composing event streams and handling futures in a structured way.

gtk-rs has built-in support for this, which makes programming GTK+ UIs nice. It is an extra over GTK+ but should be seen as essential. It would be nice if GtkD could provide support for it. And yes, it is all about event loops.

-- 
Russel.
===========================================
Dr Russel Winder      t: +44 20 7585 2200
41 Buckmaster Road    m: +44 7770 465 077
London SW11 1EN, UK   w: www.russel.org.uk



November 11, 2019
On 11/11/2019 12:56 AM, Russel Winder wrote:
> On Fri, 2019-11-08 at 09:57 +0000, Sebastiaan Koppe via Digitalmars-d wrote:
> […]
>> Please have a look at the approach taken by structured
>> concurrency. Recently mentioned on this forum by John Belmonte:
>> https://forum.dlang.org/post/rnqbswwwhdwkvvqvodlb@forum.dlang.org
> […]
> 
> It is also worth remembering Reactive Programming
> 
> https://en.wikipedia.org/wiki/Reactive_programming
> 
> There was a lot of exaggerated hype when it first came out, but over time
> it all settled down, leading to a nice way of composing event streams and
> handling futures in a structured way.
> 
> gtk-rs has built in support for this that makes programming GTK+ UIs nice. It
> is an extra over GTK+ but should be seen as essential. It would be nice if
> GtkD could provide support for it. And yes it is all about event loops.

Okay, now this is a concept that interests me.

It hits a lot closer to what I would consider is a good event loop implementation, even if my existing designs are not complete enough for it.

Any more resources I should take a look at?
November 10, 2019
On Sunday, 10 November 2019 at 12:10:00 UTC, rikki cattermole wrote:
> On 11/11/2019 12:56 AM, Russel Winder wrote:
>> [...]
>
> Okay, now this is a concept that interests me.
>
> It hits a lot closer to what I would consider is a good event loop implementation, even if my existing designs are not complete enough for it.
>
> Any more resources I should take a look at?

Take a look here: http://reactivex.io

There's also a D library inspired from that somewhere ...

November 10, 2019
On Sun, 2019-11-10 at 13:48 +0000, Paolo Invernizzi via Digitalmars-d wrote:
> On Sunday, 10 November 2019 at 12:10:00 UTC, rikki cattermole wrote:
> > On 11/11/2019 12:56 AM, Russel Winder wrote:
> > > [...]
> > 
> > Okay, now this is a concept that interests me.
> > 
> > It hits a lot closer to what I would consider is a good event loop implementation, even if my existing designs are not complete enough for it.
> > 
> > Any more resources I should take a look at?
> 
> Take a look here: http://reactivex.io
> 
> There's also a D library inspired from that somewhere ...

There are lots of implementations of the official ReactiveX API managed by this GitHub organisation:

https://github.com/ReactiveX

The implementation of the reactive idea in gtk-rs is a specialised one, since the manager of the futures stream must integrate with the GTK event loop – there is no separate event loop for the futures; it is fully integrated into the GTK+ event loop.

A real-world example: in D, to receive events from other threads and process them in the GTK+ thread, I have to do:

    new Timeout(500, delegate bool() {
        receiveTimeout(0.msecs,
            (FrontendAppeared message) {
                addFrontend(message.fei);
            },
            (FrontendDisappeared message) {
                removeFrontend(message.fei);
            },
        );
        return true;
    });

which is not very event driven and is messy – unless someone knows how to do this better. Don't ask how to do this in C++ with gtkmm, you really do not want to know.

With Rust:

    message_channel.attach(None, move |message| {
        match message {
            Message::FrontendAppeared{fei} => add_frontend(&c_w, &fei),
            Message::FrontendDisappeared{fei} => remove_frontend(&c_w, &fei),
            Message::TargettedKeystrokeReceived{tk} => process_targetted_keystroke(&c_w, &tk),
        }
        Continue(true)
    });

which abstracts things far better, in a way that is comprehensible and yet hides the details.

The Rust implementation handles more events, since the D implementation is now archived and all the work is happening on the Rust one.

-- 
Russel.
===========================================
Dr Russel Winder      t: +44 20 7585 2200
41 Buckmaster Road    m: +44 7770 465 077
London SW11 1EN, UK   w: www.russel.org.uk



November 10, 2019
On Sunday, 10 November 2019 at 10:57:19 UTC, Russel Winder wrote:
> Chapel and its parallelism structures work just fine on a laptop.

Ok, maybe I've read the Chapel spec through an SGI lens that has made me a bit biased there. I will have to give Chapel a spin to figure it out, but quite frankly, right now the future of C++ seems more interesting than Chapel from a pragmatic application-programming concurrency viewpoint, with the upcoming concurrency-related extensions, stackless coroutines, etc. I cannot really see myself using Chapel to build a desktop application. Maybe that is unjustified bias on my part, though.

>> I've got a feeling that some model reminiscent of actor based languages will take over at some point. E.g. something closer to Go and Pony, but with local memory baked in as a design premise.
>
> We were saying that in 1988, my research team and I even created a programming language, Solve – admittedly active objects rather than actors but in some ways the two are indistinguishable to most programmers. We even did a couple of versions of the model based on C++ in the early 1990s: UC++ and KC++. I am still waiting for people to catch up. I am not holding my breath, obviously.

Cool. I think the hardware is really the main issue, along with "installed base" issues from existing applications requiring the current hardware model. So lately, speed has primarily come from specialised processors, i.e. GPUs: first as SIMD-style VLIW, now as many-core RISC to be more flexible.

So it might take time before we see "clusters" of simple CPUs with local memory. Maybe it will come through embedded. Maybe with automation/robotics, where you don't want the whole machine to fail just because a small part of it failed. But culture is a strong force... so the "not holding my breath" makes sense... :-/

> Chapel and Pony are the interesting ones here. Chapel I believe can get traction since it is about using declarative abstractions to harness parallelism on a PGAS model. Pony I fear may be a bit too niche to get traction but it proves (existence proof) an important point about actors that previously only Erlang was pushing as an idea.

Outside research languages, I agree. For supported languages Chapel and Pony are very interesting and worth keeping an eye on, even if they have very low adoption. You can also take their core ideas with you when programming in other languages.

> Go is not uninteresting – quite the opposite, since it is based on processes and channels and effectively implements CSP. However, far too many people using Go fail to harness goroutines properly, since they have failed to learn the lesson that shared-memory multi-threading is not the right model for harnessing parallelism.

That is probably true. I've only used goroutines in the most limited, trivial way in my own programs (basically like a future). There are probably other patterns I could consider but don't really think of.

Although, I don't really feel the abstraction mechanisms in Go encourage you to write things that could become complex... I haven't written enough Go code to know this for sure, but I tend to get the feeling that "I better keep this really simple and transparent" when writing Go code. It is a bit too C-ish in some ways (despite being fairly high level).



November 10, 2019
On Sun, 2019-11-10 at 14:42 +0000, Ola Fosheim Grøstad via Digitalmars-d
wrote:
[…]
> Ok, maybe I've read the Chapel spec through a SGI-lens that has made me a bit biased there.  I will have to give Chapel a spin to figure it out, but quite frankly, right now the future of C++ seems more interesting than Chapel from a pragmatic application-programming concurrency viewpoint, with the upcoming concurrency-related extensions, stackless coroutines etc. I cannot really see myself using Chapel to build a desktop application. Maybe unjustified bias on my part, though.

Chapel is very definitely a language for computationally intensive code, a replacement for Fortran (and C++). It has no pretensions to be a general-purpose language. The intention has been to integrate well with Python, so that Python is the language of the frontend and Chapel is the language of the computational backend – cf. CERN's view of C++ and Python. The first attempts at integrating Python and Chapel didn't work as well as hoped and were dropped. Now there are new ways of inter-working that show some serious promise. One of these is Arkouda: https://github.com/mhmerrill/arkouda. I haven't tried it yet, but I will have to if I decide to go to PyConUK 2020.

[…]
> Cool, I think really the hardware is the main issue, and "installed base" issues with existing applications requiring the current hardware model. So, lately speed has primarily come from special processors, GPUs, first as SIMD-style VLIW, now as many-core RISC to be more flexible.

In hindsight, what we were trying to do with programming languages in the late 1980s and early 1990s was at least a decade too early – the processors were not up to what we wanted to do. A decade or a decade and a half later and we would have had no problem. The issue was not processor cycles; it was functionality to support multi-threading at the kernel level, and fibres at the process level. If we had the money and the team today, I'd hope we would beat Pony, C++, D, Go, Rust, etc. at their own game. Ain't going to happen, but that's life.

> So it might take time before we see "clusters" of simple CPUs with local memory. Maybe it will come through embedded. Maybe with automation/robotics, where you don't want the whole machine to fail just because a small part of it failed. But culture is a strong force... so the "not holding my breath" makes sense... :-/

Intel had the chips in 2008, e.g. the Polaris chip – cf. the Supercomputing 2008 proceedings. But the experiment failed to be picked up, for reasons I have no idea of – possibly the chips were available a decade or more ahead of software developers' ability to deal with the concepts.

[…]
> Although, I don't really feel the abstraction mechanisms in Go encourage you to write things that could become complex... I haven't written enough Go code to know this for sure, but I tend to get the feeling that "I better keep this really simple and transparent" when writing Go code. It is a bit too C-ish in some ways (despite being fairly high level).

Go's implementation of CSP is not infallible; it is still possible to create livelock and deadlock – but you do have to try very hard, or be completely unaware of how message passing between processes over a kernel thread pool works. NB CSP doesn't stop you creating livelock or deadlock, but it does tell you when and why it happens.

Go was intended to be a replacement for C, so if it feels C-ish the design has achieved success!

-- 
Russel.
===========================================
Dr Russel Winder      t: +44 20 7585 2200
41 Buckmaster Road    m: +44 7770 465 077
London SW11 1EN, UK   w: www.russel.org.uk



November 11, 2019
On Sunday, 10 November 2019 at 21:49:45 UTC, Russel Winder wrote:
> pretensions to be a general purpose language. The intention has been to integrate well with Python so that Python is the language of the frontend and Chapel is the language of the computational backend – cf. CERN's view of C++ and Python.

Yes, that would make Chapel much more interesting. I think that being able to write libraries/engines in a new language and call into it from an established high-level language is a good general strategy.

Oftentimes language authors see their language as the host language and other languages as "subordinate" library languages. That is probably a strategic mistake. It is kind of sad that so many open-source library features have to be reimplemented in various languages.


> If we had the money and the team today, I'd hope we would beat Pony, C++, D, Go, Rust, etc. at their own game. Ain't going to happen, but that's life.

You could write up your ideas and create a blog post describing them. I believe there is a subreddit for people creating their own languages; it could inspire someone.


> Intel had the chips in 2008, e.g. The Polaris Chip – cf. SuperComputer 2008 proceedings. But the experiment failed to be picked up for reasons that I have no idea of – possibly the chips were available a decade or more ahead of software developers ability to deal with the concepts.

Ah, interesting: Polaris used a network-on-a-chip. According to some VLSI websites, Polaris led to Intel's many-micro-cores-on-a-chip architecture, which led to Larrabee and the Xeon Phi. The last version of the Phi was released in 2017.

Another type of many-micro-core processor with some local memory is the kind geared towards audio/video, which uses multiplexed grid-like crossbar data buses for internal communication between cores. But those are more in the DSP tradition, so not really suitable for actor-style languages, I think.


> between processes over a kernel thread pool works. NB CSP doesn't stop you creating livelock or deadlock but it does tell you when and why it happens.

Pony claims to be deadlock free: https://www.ponylang.io/discover/

I assume that you could still experience starvation-like scenarios or run out of memory as unprocessed events pile up, but I don't know exactly how they schedule actors.

> Go was intended to be a replacement for C, so if it feels C-ish the design has achieved success!

Yes… just like PHP…
