Thread overview
Wait-free MPSC and MPMC implementations in D
manumaster (May 08, 2018)
Dmitry Olshansky (May 08, 2018)
David Nadlinger (May 08, 2018)
Andy Smith (May 09, 2018)
David Nadlinger (May 09, 2018)
Andy Smith (May 09, 2018)
Shachar Shemesh (May 09, 2018)
Andy Smith (May 09, 2018)
Jonathan M Davis (May 09, 2018)
Shachar Shemesh (May 09, 2018)
David Nadlinger (May 15, 2018)
Shachar Shemesh (May 09, 2018)
May 08, 2018 (manumaster)
Is there an implementation like this in D?

https://github.com/pramalhe/ConcurrencyFreaks/blob/master/papers/multilist-2017.pdf

May 08, 2018 (Dmitry Olshansky)
On Tuesday, 8 May 2018 at 04:00:03 UTC, manumaster wrote:
> Is there an implementation like this in D?
>
> https://github.com/pramalhe/ConcurrencyFreaks/blob/master/papers/multilist-2017.pdf

Look for Mecca by the Weka.io team. It has great industry-grade lock-free implementations for both. Not very flexible, but these things usually aren't.

Can't comment on the wait-free property (the source doesn't claim it, and I haven't looked closely enough to prove it either way).

https://github.com/weka-io/mecca

May 08, 2018 (David Nadlinger)
On Tuesday, 8 May 2018 at 17:20:33 UTC, Dmitry Olshansky wrote:
> On Tuesday, 8 May 2018 at 04:00:03 UTC, manumaster wrote:
>> Is there an implementation like this in D?
>>
>> https://github.com/pramalhe/ConcurrencyFreaks/blob/master/papers/multilist-2017.pdf
>
> Look for Mecca by the Weka.io team. It has great industry-grade lock-free implementations for both. Not very flexible, but these things usually aren't.

Are you referring to https://github.com/weka-io/mecca/blob/master/src/mecca/containers/otm_queue.d?

These are SPMC and MPSC variants only, not MPSP. They are also bounded-length (power-of-two sizes only), whereas the paper mentioned implements a linked-list-based version.
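
For readers who haven't seen the trick before: the power-of-two restriction is what lets a bounded queue turn its ever-growing head/tail counters into slot indices with a single bitwise AND instead of a division. A minimal sketch of the idea in D (illustrative only, not the actual otm_queue code):

enum capacity = 1024;   // must be a power of two
static assert((capacity & (capacity - 1)) == 0);

// Counters grow monotonically; only the index into the backing array wraps.
size_t slotIndex(size_t counter)
{
    return counter & (capacity - 1);   // same result as counter % capacity
}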

The algorithm isn't wait-free (haven't thought too carefully about this, though), and should be used with a queue size much larger than the number of threads to make sure all of them make progress.

If you are considering using it in your projects, definitely keep an eye on the real-world performance in the beginning (but then, if you weren't, you wouldn't be reaching for something like this anyway). There are a few interesting points about the algorithm in this respect; for example, the availability of queue space is checked with loads that use only raw memory order, as changes typically appear fast enough across cores on x86 anyway. If your use case is similar enough to Weka's, it is indeed very highly optimised, though.
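
To make the "raw memory order" point concrete in D terms, such a check can look roughly like the following. This is a hand-rolled sketch, not code from otm_queue, and the comment gives only one possible justification for relaxing the load:

import core.atomic;

enum capacity = 1024;   // power of two
shared size_t head;     // slots released by the consumer
shared size_t tail;     // slots claimed by producers

// Producer-side "is there room?" check. A MemoryOrder.raw (relaxed) load of
// `head` suffices here because a stale value can only make the consumer look
// further behind than it really is: a producer may spuriously conclude the
// queue is full, but it will never reuse a slot that is still occupied.
bool mayHaveRoom(size_t myTail)
{
    auto h = atomicLoad!(MemoryOrder.raw)(head);
    return myTail - h < capacity;
}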

As far as industry-grade goes, note that many of the comments have not been updated since a previous combined implementation for SPMC/MPSC was split up into two types, so there is quite a bit of stale cruft around.

 — David


May 09, 2018 (Andy Smith)
On Tuesday, 8 May 2018 at 22:09:37 UTC, David Nadlinger wrote:
> On Tuesday, 8 May 2018 at 17:20:33 UTC, Dmitry Olshansky wrote:
>> On Tuesday, 8 May 2018 at 04:00:03 UTC, manumaster wrote:
>>> Is there an implementation like this in D?
>>>
>>> https://github.com/pramalhe/ConcurrencyFreaks/blob/master/papers/multilist-2017.pdf
>>
>> Look for Mecca by the Weka.io team. It has great industry-grade lock-free implementations for both. Not very flexible, but these things usually aren't.
>
> Are you referring to https://github.com/weka-io/mecca/blob/master/src/mecca/containers/otm_queue.d?
>
> These are SPMC and MPSC variants only, not MPSP. They are also bounded-length (power-of-two sizes only), whereas the paper mentioned implements a linked-list-based version.
>
> The algorithm isn't wait-free (haven't thought too carefully about this, though), and should be used with a queue size much larger than the number of threads to make sure all of them make progress.
>
> If you are considering using it in your projects, definitely keep an eye on the real-world performance in the beginning (but then, if you weren't, you wouldn't be reaching for something like this anyway). There are a few interesting points about the algorithm in this respect; for example, the availability of queue space is checked with loads that use only raw memory order, as changes typically appear fast enough across cores on x86 anyway. If your use case is similar enough to Weka's, it is indeed very highly optimised, though.
>
> As far as industry-grade goes, note that many of the comments have not been updated since a previous combined implementation for SPMC/MPSC was split up into two types, so there is quite a bit of stale cruft around.
>
>  — David

What's MPSP? :-)

During Shachar's talk on the Saturday morning following the conclusion of DConf, he made it clear that the Mecca library is being used by the ~200 kLOC Weka.io codebase ... a codebase which has very stringent latency *and* throughput requirements to satisfy customer needs in the storage space.

So if any D codebase has got bragging rights on the term 'industry-grade', I think this has to be one of them. So if they've copy/pasted a few comments ... meh ... I for one am happy to let it slide. These guys have got bigger fish to fry :-)

Cheers,

A.

May 09, 2018 (David Nadlinger)
On Wednesday, 9 May 2018 at 00:20:39 UTC, Andy Smith wrote:
> What's MPSP? :-)

Whoops, MPMC, of course. ;) And that wasn't even the only typo; I should know better than to post while distracted…

> So if any D codebase has got bragging rights on the term 'industry-grade', I think this has to be one of them. So if they've copy/pasted a few comments ... meh ... I for one am happy to let it slide. These guys have got bigger fish to fry :-)

Oh, it's definitely industry-grade in that it is used to great effect within Weka. It's just much more of an example of an interesting algorithm than of "mechanical" code quality.

By the way, if anyone ends up testing/benchmarking this on non-TSO CPUs, I'd be curious to hear about the results. I hope I got the memory order semantics right, but of course you don't necessarily see much of that on x86. It would also be interesting to see how bad the lack of fairness can get in synthetic test cases.
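
If someone wants to poke at the fairness question even before getting hold of non-x86 hardware, a synthetic probe along these lines already shows how skewed slot claiming can get under contention. This is a throwaway sketch, not part of Mecca; a CAS loop on a shared counter stands in for claiming a queue slot:

import core.atomic, core.thread;
import std.stdio;

enum nThreads = 8;
enum totalOps = 1_000_000;

shared size_t next;                // shared "slot" counter being claimed
__gshared size_t[nThreads] wins;   // each entry written only by its own thread

void worker(size_t id)
{
    for (;;)
    {
        auto cur = atomicLoad(next);
        if (cur >= totalOps)
            break;
        // Claim the next slot with a CAS; retry on contention (lock-free,
        // not wait-free: a thread can in principle lose every race).
        if (cas(&next, cur, cur + 1))
            ++wins[id];
    }
}

void main()
{
    // Factory keeps each delegate bound to its own copy of `id`.
    void delegate() makeWorker(size_t id)
    {
        return { worker(id); };
    }

    Thread[] threads;
    foreach (i; 0 .. nThreads)
        threads ~= new Thread(makeWorker(i)).start();
    foreach (t; threads)
        t.join();

    // A heavily skewed distribution hints at the unfairness in question;
    // it says nothing about correctness, only about per-thread progress.
    writeln(wins);
}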

  — David
May 09, 2018 (Andy Smith)
On Wednesday, 9 May 2018 at 01:30:09 UTC, David Nadlinger wrote:
> By the way, if anyone ends up testing/benchmarking this on non-TSO CPUs, I'd be curious to hear about the results.

Me too!

:-)

Cheers,

A.
May 09, 2018 (Shachar Shemesh)
On 08/05/18 07:00, manumaster wrote:
> Is there an implementation like this in D?
> 
> https://github.com/pramalhe/ConcurrencyFreaks/blob/master/papers/multilist-2017.pdf 
> 
> 

It's two of Mecca's containers:

https://weka-io.github.io/mecca/docs/mecca/containers/otm_queue.html
May 09, 2018 (Shachar Shemesh)
On 09/05/18 03:20, Andy Smith wrote:
> 
> During Shachar's talk on the Saturday morning following the conclusion of DConf, he made it clear that the Mecca library is being used by the ~200 kLOC Weka.io codebase ... a codebase which has very stringent latency *and* throughput requirements to satisfy customer needs in the storage space.
> 
> So if any D codebase has got bragging rights on the term 'industry-grade', I think this has to be one of them.

Let me start off by saying that it is great that people appreciate and enjoy Mecca. With that said, I would be wary of the direction this thread is threatening to take.

I have a motto in life: if you assume you're great, you most likely aren't. I grew out of the "my code has no bugs" phase of my life quite a while ago.

The fact that code is used, to good effect and for a long period of time, does not mean that it doesn't have problems; some of them might be quite fundamental. "This code is highly tested" is not a good answer to "I think I found a problem".

As such, I would kindly ask everyone on this list to not take offense at criticism aimed at me or at my code. I find comments such as David's highly constructive, and the last thing I'd want is for him, or anyone else, to be wary of voicing them.

I have not had the chance to parse David's critique to decide whether I agree with it or not. Regardless of my final conclusion, I think we should do everything we can to avoid suggesting it is illegitimate of him to make it.

> So if they've copy/pasted a few comments ... meh ... I for one am happy to let it slide. These guys have got bigger fish to fry :-)

Perfection is a road you take, not a place you reach. All large repositories of code tend to have areas that are less visited. Mecca is no exception. I, for one, welcome being informed of where my code can be improved.

During the documentation phase, I made every effort to really understand the limitations of the code. Any code I was not sure about, I left out of the documentation (and there are a *lot* of undocumented areas in Mecca, some of which I suspect should not stay there).

So, my plea is this. If you find an area worth improving in Mecca, please don't hesitate to post. If you find you do hesitate, please please feel free to send me a private message.

I *want* to know of the problems.

Thank you,
Shachar
May 09, 2018 (Shachar Shemesh)
On 09/05/18 01:09, David Nadlinger wrote:
> The algorithm isn't wait-free (haven't thought too carefully about this, though)

This mirrors a discussion I had with Maor (who originally wrote it). Let's see if I can bring you around the way I was brought around.

At the API level, there are two cases where the algorithm *cannot* be wait-free: if a consumer tries to consume from an empty queue, or a producer tries to produce into a full one.

Those two aside (which, as I stated above, do not depend on the implementation), the algorithm actually is wait-free.
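
To put the same distinction into code: in a toy bounded SPSC ring buffer (sketched below purely for illustration; it is much simpler than, and not taken from, the MPSC/SPMC code in otm_queue), each operation runs a fixed number of steps and simply reports failure when the queue is empty or full. That failure report at the API boundary is exactly the part no implementation can avoid; everything else can, at least in principle, be made wait-free.

import core.atomic;

struct SpscQueue(T, size_t capacity)
{
    static assert(capacity > 0 && (capacity & (capacity - 1)) == 0,
                  "capacity must be a power of two");

    private T[capacity] slots;
    private shared size_t head;   // advanced only by the consumer
    private shared size_t tail;   // advanced only by the producer

    // Constant number of steps; never waits on the consumer.
    bool tryPush(T value)
    {
        auto t = atomicLoad!(MemoryOrder.raw)(tail);   // own counter
        auto h = atomicLoad!(MemoryOrder.acq)(head);   // consumer's progress
        if (t - h >= capacity)
            return false;                              // full: report, don't spin
        slots[t & (capacity - 1)] = value;
        atomicStore!(MemoryOrder.rel)(tail, t + 1);    // publish the element
        return true;
    }

    // Constant number of steps; never waits on the producer.
    bool tryPop(out T value)
    {
        auto h = atomicLoad!(MemoryOrder.raw)(head);   // own counter
        auto t = atomicLoad!(MemoryOrder.acq)(tail);   // producer's progress
        if (t == h)
            return false;                              // empty: report, don't spin
        value = slots[h & (capacity - 1)];
        atomicStore!(MemoryOrder.rel)(head, h + 1);    // release the slot
        return true;
    }
}

(To actually share an instance between two threads, declare it __gshared or add shared method overloads; that plumbing is omitted to keep the sketch short.)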

> As far as industry-grade goes, note that many of the comments have not been updated since a previous combined implementation for SPMC/MPSC was split up into two types, so there is quite a bit of stale cruft around.

Will definitely have a second look.

Shachar
May 09, 2018 (Andy Smith)
On Wednesday, 9 May 2018 at 04:13:08 UTC, Shachar Shemesh wrote:
> On 09/05/18 03:20, Andy Smith wrote:
>> [...]
>
> Let me start off by saying that it is great that people appreciate and enjoy Mecca. With that said, I would be wary of the direction this thread is threatening to take.
>
> [...]

Well said, and re-reading my last comment, apologies if the tone seemed a little hostile. That certainly wasn't the intention.

Cheers,

A.


