October 23, 2017
On Monday, 23 October 2017 at 11:02:41 UTC, Martin Nowak wrote:
> On Monday, 23 October 2017 at 06:05:50 UTC, drug wrote:
>> On 20.10.2017 at 17:46, Martin Nowak wrote:
>> My 2 cent:
>> 1. dub needs the ability to work with repositories other than the standard ones.
>
> You mount or clone whatever you want and use `dub add-local`.
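
A minimal sketch of that workflow (the repository URL, package name, and version below are illustrative):

```shell
# clone (or mount) the dependency you want to override
git clone https://github.com/example/mylib.git
# register the working copy with dub; builds now resolve mylib from here
dub add-local ./mylib 1.2.3
```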
>
>> 2. Multicore building: the entire project in D builds faster than the C++ one (I have two implementations of the same project), but for incremental builds C++ is faster.
>
> I always assumed this to be the main reason people ask for a better build tool, but it's not something dub can do at the moment. It's a limitation of the compiler that requires various changes and a slight redesign of our build model.

Does it? Reggae can do parallel per-package builds (in fact, that's the default) right now.

> In C++ incremental rebuilds are simple as you compile each file individually anyhow, but that's the crux for why C++ compilations are so slow in the first place.

Not really. C++ is slow to compile anyway because parsing it is slow. C has the same compilation model and is much much faster to compile than C++. There's also the fact that the same headers are parsed over and over again. `#include <iostream>` requires parsing thousands of lines of code from dozens of files.

However, if all you touched was a C++ implementation file, then incremental builds are faster than D. Really.

> Compiling multiple modules at once provides lots of speedups as you do not have to reparse and analyze common/mutual imports, but on the downside it cannot be parallelized that well.

I measured, and indeed compiling per package is faster than recompiling each of the affected modules individually, even when done in parallel. Weirdly enough, this is a time-wise disadvantage of having implementation and declarations in the same file.
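
The difference can be seen directly with the compiler (paths and names below are illustrative):

```shell
# whole package in one invocation: common/mutual imports are
# parsed and analyzed only once
dmd -c -ofmypkg.o source/mypkg/*.d

# per-module, C++-style: easy to parallelize, but every invocation
# re-parses and re-analyzes the shared imports
for f in source/mypkg/*.d; do dmd -c "$f"; done
```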

> Dub could parallelize building individual packages/sub-packages (target + dependencies) right now though.

reggae already does, just point it at a dub package directory. By default it'll even build the default target and a unittest one in parallel.
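
A sketch of that workflow, assuming reggae and ninja are installed:

```shell
# generate a ninja build description from an existing dub project...
reggae -b ninja /path/to/dub/project
# ...then build; ninja runs independent targets in parallel by default
ninja
```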

>
>> 3. dub's single build mode does not cache builds, so it rebuilds the entire project every time.
>
> Could you please file an issue with a test case for that.
> Why do you use single build mode in the first place?

I'd assume it'd be to only rebuild the necessary files, C++-style. However, as you stated above, that's usually slower anyway.

Atila
October 23, 2017
On Monday, 23 October 2017 at 11:39:58 UTC, Martin Nowak wrote:
> On Monday, 23 October 2017 at 11:23:18 UTC, Guillaume Piolat wrote:
>> Not anymore; you can use the `export` keyword for Windows (e.g. with LDC >= 1.2).
>
> With what semantics?
>

We used to require .def files, and now use `export` instead on Windows.
It works with DMD (not sure since which version) and with LDC since https://github.com/ldc-developers/ldc/pull/1856
Windows only, and free functions only.
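
A minimal sketch of what that looks like (the module and function names are hypothetical):

```d
// mylib.d -- compiled into a DLL on Windows; no .def file needed.
module mylib;

// `export` marks the symbol for dllexport; per the above, this
// currently works for free functions only.
export int add(int a, int b)
{
    return a + b;
}
```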

>> Every-symbol-public-by-default in Posix is annoying though :)
>
> We agreed on hidden visibility by default for everything that's not exported.
> This requires export to be fixed on non-Windows machines first.

This is especially interesting since hidden visibility for most symbols is required to make -dead_strip effective (strips most of the object code here).
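
As a sketch of the effect, assuming a recent LDC on macOS (flag spellings may vary by version):

```shell
# compile with hidden visibility for non-exported symbols, and let the
# linker strip unreferenced object code
ldc2 -fvisibility=hidden -O2 app.d -L-dead_strip
```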


> By any means, if someone wants to help here, get in touch with Benjamin Thaut and me.
> This has been lingering around for way too long, and Benjamin alone has a hard time pushing this.

Would help via Bountysource be adequate?

October 23, 2017
On Friday, 20 October 2017 at 09:49:34 UTC, Adam Wilson wrote:
> Others are less obvious, for example, async/await is syntax sugar for a collection of Task-based idioms in C#.

Now I think it doesn't fit D. async/await wasn't made for performance but for conserving thread resources; async calls are rather expensive, which doesn't fit D if we prefer raw performance. I also found another shortcoming: it doesn't interoperate well with caching. A cache flip-flops between synchronous and asynchronous operation: when you hit the cache, the call is synchronous; when you miss, it performs IO.
October 23, 2017
On Monday, 23 October 2017 at 11:21:13 UTC, Martin Nowak wrote:
> On Saturday, 21 October 2017 at 18:52:15 UTC, bitwise wrote:
>> On Wednesday, 18 October 2017 at 08:56:21 UTC, Satoshi wrote:
>>
>>> async/await (vibe.d is nice but useless in comparison to C# or js async/await idiom)
>>
>>
>>> Reference counting when we cannot use GC...
>>
>>
>> If I understand correctly, both of these depend on implementation of 'scope' which is being worked on right now.
>
> Scope is about preventing pointer escaping, ref-counting also needs to make use-after-free safe which is currently in the early spec phase.

FYI, in D's cousin Nim (an imperative, GC'ed systems PL), there's currently a similar discussion around how to make seq and string GC-free:

https://nim-lang.org/araq/destructors.html

so these efforts in D are timely.
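
For context, a tiny sketch of the pointer-escape checking that `scope` enables (this assumes a `-dip1000`-style preview switch; the exact diagnostic wording varies):

```d
// With scope checking enabled, the compiler rejects the return below,
// because a `scope` parameter must not escape the function.
@safe int* leak(scope int* p)
{
    return p; // error: scope parameter must not escape
}
```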

October 23, 2017
On Monday, 23 October 2017 at 11:02:41 UTC, Martin Nowak wrote:
>
> In C++ incremental rebuilds are simple as you compile each file individually anyhow, but that's the crux for why C++ compilations are so slow in the first place.
> Compiling multiple modules at once provides lots of speedups as you do not have to reparse and analyze common/mutual imports, but on the downside it cannot be parallelized that well.
>

I wish I knew how Delphi compiles things, because it is by far the fastest compiler I have ever tried. It also compiled individual files, but not into .obj files; instead it produced .dcu files, which it reused when compiling dependent modules as long as the source hadn't changed.
October 23, 2017
On 10/18/2017 1:56 AM, Satoshi wrote:
> Unable to publish a closed-source library without workarounds and an ugly PIMPL design.


Consider this:

----------- file s.d ------------
  struct S {
    int x;
    this(int x) { this.x = x; }
    int getX() { return x; }
  }
----------- file s.di ------------
  struct S {
    this(int x);
    int getX();
  }
--------------------------

User code:

    import s;
    import std.stdio;

    void main() {
        S s = S(3);
        writeln(s.getX());
    }

Ta dah! Implementation is hidden, no PIMPL. Of course, inlining of the member functions won't work, but it won't work in C++, either, when this technique is used.

I.e. you can use .di/.d files just like you'd use .h/.cpp in C++. The technique works with classes, too.
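
Relatedly, the compiler can generate a starting-point header for you; note that generated headers may keep some function bodies (e.g. for templates and inlining), so hand-trimming like the s.di above can still be needed to hide the implementation:

```shell
# emit s.di alongside s.d; -o- suppresses object file generation
dmd -H -o- s.d
```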
October 23, 2017
On Monday, 23 October 2017 at 12:48:33 UTC, Atila Neves wrote:
> On Monday, 23 October 2017 at 09:13:45 UTC, Satoshi wrote:
>> On Wednesday, 18 October 2017 at 08:56:21 UTC, Satoshi wrote:
>>> [...]
>>
>> What about this one?
>>
>> auto foo = 42;
>> auto bar = "bar";
>> writeln(`Foo is {foo} and bar is {bar}`);
>
> writeln("Foo is ", foo, " and bar is ", bar);
>
> Two more characters.
>
> Atila

Okay, but what about now?

void sendAMessage(string message)
{
    ....
}

Guess sendAMessage("Foo is", foo, "and bar is", bar); won't work.

However sendAMessage(`Foo is {foo} and bar is {bar}`); would have.

Your example is a common counter-answer to string interpolation, but it misses the key point: you don't always use interpolation for printing.

October 23, 2017
On Monday, 23 October 2017 at 21:14:18 UTC, bauss wrote:
> On Monday, 23 October 2017 at 12:48:33 UTC, Atila Neves wrote:
>> On Monday, 23 October 2017 at 09:13:45 UTC, Satoshi wrote:
>>> On Wednesday, 18 October 2017 at 08:56:21 UTC, Satoshi wrote:
>>>> [...]
>>>
>>> What about this one?
>>>
>>> auto foo = 42;
>>> auto bar = "bar";
>>> writeln(`Foo is {foo} and bar is {bar}`);
>>
>> writeln("Foo is ", foo, " and bar is ", bar);
>>
>> Two more characters.
>>
>> Atila
>
> Okay, but what about now?
>
> void sendAMessage(string message)
> {
>     ....
> }

sendAMessage(text(...));

Atila
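
Spelled out, a self-contained sketch of that workaround using `std.conv.text` (the `sendAMessage` body is a stand-in):

```d
import std.conv : text;
import std.stdio : writeln;

// stand-in for the real message-sending function
void sendAMessage(string message)
{
    writeln(message);
}

void main()
{
    auto foo = 42;
    auto bar = "bar";
    // text() converts each argument to a string and concatenates them
    sendAMessage(text("Foo is ", foo, " and bar is ", bar));
}
```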


October 23, 2017
On 10/23/17 08:21, Kagamin wrote:
> On Friday, 20 October 2017 at 09:49:34 UTC, Adam Wilson wrote:
>> Others are less obvious, for example, async/await is syntax sugar for
>> a collection of Task-based idioms in C#.
>
> Now I think it doesn't fit D. async/await wasn't made for performance
> but for conserving thread resources; async calls are rather expensive,
> which doesn't fit D if we prefer raw performance. I also found another
> shortcoming: it doesn't interoperate well with caching. A cache
> flip-flops between synchronous and asynchronous operation: when you hit
> the cache, the call is synchronous; when you miss, it performs IO.

Actually, I think it fits perfectly with D, not for reasons of performance but of flexibility. D is a polyglot language, supporting by far the largest number of methodologies in a single language that I've ever encountered.

Additionally, MSFT/C# fully recognizes that the benefits of async/await have never been, and were never intended to be, about performance. Async/await trades raw performance for the ability to handle a truly massive number of simultaneous tasks. And it is easy to offer blocking and non-blocking calls side by side (MSFT appends 'Async' to the non-blocking call's name).

Here is the thing: many projects (particularly web-scale ones) are not all that sensitive to latency. Adding 10ms to the total call duration isn't going to affect the user experience much when you've got 500ms of IO calls to make. But blocking calls will lock up a thread for those 500ms. That can be disastrous when you have thousands of calls coming in every second to each machine.

On the flip side, if you're a financial service corp with millions to throw at hardware and an extreme latency sensitivity, you'll go for the blocking calls, because they absolutely do cost less in overall milliseconds. And you'll make up for the thread blockages by throwing an obscene amount of hardware at the problem. Because hey, you're a multi-billion dollar corp, you'll make back the few million you spent on over-provisioning hardware in a day or two.

The point is that not everyone wants, or needs, maximum raw performance per individual task. In the spirit of flexibility, D needs to provide the other choice, because it's not our job to tell our users how to run their business.

-- 
Adam Wilson
IRC: LightBender
import quiet.dlang.dev;
October 23, 2017
On Monday, 23 October 2017 at 22:22:55 UTC, Adam Wilson wrote:
> Additionally, MSFT/C# fully recognizes that the benefits of Async/Await have never been and never were intended to be for performance. Async/Await trades raw performance for an ability to handle a truly massive number of simultaneous tasks.

Could you clarify this? Do you mean it's not supposed to have better performance for small numbers of tasks, but there is supposed to be some high threshold of tasks/second at which either throughput or latency is better?